CoEnterprise is an award-winning B2B software and professional services company headquartered in New York City. Founded in 2010, CoEnterprise delivers Supply Chain and Business Analytics solutions and services that transform how companies connect and do business. CoEnterprise approaches each relationship and engagement from the perspective of three core values: collaboration, ownership, and excellence. We value collaboration with both our partners and clients in order to present the best possible outcome for our customers. Our vow to accept ownership ensures that our entire staff takes pride in our work and it is our commitment to excellence that ensures that this work is at the highest standard possible.
< class="h3">Job DescriptionResponsibilities
- Elicit, understand and restate complex business challenges related to realizing an organization’s advanced analytics data strategy
- Define and visualize data architecture patterns for analytics solutions, modeling both AS-IS and TO-BE data architecture topologies for both on-prem and cloud environments
- Identify, propose and justify data warehousing, data modeling and analytics architectures for BI, data science, ad-hoc query analysis, data sharing and application development
- Synthesize customer analytics challenges into solutions for the Snowflake Data Cloud
- Establish confidence in recommendations via product expertise, custom product demonstrations, technical phone calls, RFP/RFI responses, product roadmap discussions, architectural topology options & business process diagrams
- Articulate and support selected recommendations consultatively and with a business-minded awareness of advancing a deal
- Partner with sales to deliver on revenue plan
- Demonstrate and advise in our core Analytics platforms including:
- Snowflake
- Tableau Desktop, Alteryx, Dataiku or equivalent BI tools
- Alteryx and/or related data preparation tools
- Demonstrate, advise and make solution recommendations similar to the following tools and skillsets:
- DataRobot, Dataiku, Databricks, Spark and related data science tools
- Python, ML/AI technology options and best practices
- The current ecosystem of data warehousing solutions including those on Azure, AWS and GCP
- API integrations
- Communicate and champion the methods, processes, and certifications which make CoEnterprise a leading provider of Cloud Analytics Services
- Engage with both internal teams and customers in a consultative and approachable manner
- Design and deliver presentation materials within established content and style parameters
Professional Skills
- Proficient in delivering software demonstrations in-person and virtually
- Proven experience working with employees at all levels of an organization
- Comfortable developing and presenting solutions
- Experience creating technical business documentation like workflow diagrams, proposals, SOWs, RFPs and RFIs, etc.
- Structured and methodical approach to creating and maintaining notes, deliverables, statements of work and other work artifacts in accordance with team standards
- Strong verbal and written communication skills
- Comfortable prioritizing and managing multiple, often competing, workstreams effectively.
- Must be a continually curious, committed, and efficient learner of new business and technology skills, highly responsive to emerging sales requirements
Other
- Willingness to travel 45% or more as needed
- 3+ years' prior experience consulting on, delivering, or selling SaaS solutions and concepts at the mid-market or Enterprise level
- 3+ years building analytics solutions in the cloud, including design and delivery of data lakes, data warehouses and data marts
- 2+ years working with the Snowflake Data Cloud
- Advanced SQL skills
- Proficient coding skills in at least one of the following: Python, JavaScript, R or other data science language
- Demonstrable experience with Enterprise-class Analytics software systems like Tableau, Alteryx, and Snowflake
- Familiarity implementing solutions in at least two of the following cloud providers: AWS, Azure, Google Cloud, and IBM Cloud
- Familiarity with system integration methods such as web services, SOAP APIs, and REST APIs
- Familiarity with Advanced Analytics Applications
Come experience our spirited culture and work with a smart, dedicated and high-energy team in a stable and fast-growing company! Here is a small sample of the benefits and perks we offer:
- Comprehensive Health Insurance with generous employer contribution
- Matching 401(k) - $$$$
- Generous PTO Policy
- Virtual Team Lunches
- Wellness Program
- Monthly Mingles
- Birthday Celebrations
- Virtual Events- Happy Hours, Casino Night, Magic Show, Scavenger Hunt of National History Museum, Game Nights and more
At CoEnterprise, we believe diversity drives innovation. We are committed to creating and maintaining a workplace in which all employees have an opportunity to participate and contribute to the success of our business. In recruiting for our team, we welcome the unique contributions that you can bring. We value employees for their differences represented by a variety of dimensions including demographics, behaviors, work style and perspectives.
We are an AA/EOE employer.
Administrative Coordinator (Remote)
Job Locations: US-Remote
Requisition ID: 2022-78712
# of Openings: 1
Job Function: Clinical
Job Schedule: Regular Full-Time
Job Introduction
Maximus is currently hiring for an Administrative Coordinator. In this position, you will be performing administrative tasks to ensure compliance with all contract level requirements. This is a fully remote position with a salary range of $17-22/hour; pay is based on overall experience and qualifications.
Job Description Summary
Perform administrative tasks to ensure compliance with all contract level requirements
Job Summary
Essential Duties and Responsibilities:
- Provide customer support to internal and external customers
- Responsible for assigning and coordinating referrals for contract work to appropriate parties
- Computer data entry
- Perform all job duties in compliance with Person First standards, HIPAA guidelines, and company confidentiality policies and procedures.
- Complete assignments within established compliance standards and timelines
- Monitor multiple work queues daily to ensure cases move quickly through each process stage.
- Identify and resolve data errors
- Perform other related duties as assigned
- Excellent written and verbal communication skills
- Excellent interpersonal and customer service skills
- Proficient in Microsoft Office Suite
- Excellent organizational skills and attention to detail
- Ability to work in a fast-paced environment
- Ability to work independently
Minimum Requirements:
- High School Degree or equivalent and 0-2 years of relevant experience, or Associate Degree
- Clinical office experience preferred
Education and Experience Requirements
Minimum Requirements:
- High School Degree or equivalent and 2 years of relevant experience, or Associate Degree
- Clinical office experience preferred
- Self-starter with sense of urgency and the ability to work in fast-paced, complex, and deadline-driven environment
- Strong organizational skills including time management, calendar management, scheduling, project management, records and filing and using digital resources
- Ability to complete assignments with attention to detail and a high degree of accuracy
- Strong interpersonal skills including tact, diplomacy and flexibility to work effectively with all members of the organization
- Ability to work as a team member, as well as independently
- Demonstrated ability to communicate information clearly and accurately both verbally and in writing
- Ability to exercise judgment and discretion with highly sensitive and confidential information
- Proficient with Microsoft Office (intermediate level in Excel, PowerPoint, SharePoint, and Microsoft Outlook)
MAXIMUS Introduction
Since 1975, Maximus has operated under its founding mission of Helping Government Serve the People, enabling citizens around the globe to successfully engage with their governments at all levels and across a variety of health and human services programs. Maximus delivers innovative business process management and technology solutions that contribute to improved outcomes for citizens and higher levels of productivity, accuracy, accountability and efficiency of government-sponsored programs. With more than 30,000 employees worldwide, Maximus is a proud partner to government agencies in the United States, Australia, Canada, Saudi Arabia, Singapore and the United Kingdom. For more information, visit https://www.maximus.com.
EEO Statement
EEO Statement: Active military service members, their spouses, and veteran candidates often embody the core competencies Maximus deems essential, and bring a resiliency and dependability that greatly enhances our workforce. We recognize your unique skills and experiences, and want to provide you with a career path that allows you to continue making a difference for our country. We’re proud of our connections to organizations dedicated to serving veterans and their families. If you are transitioning from military to civilian life, have prior service, are a retired veteran or a member of the National Guard or Reserves, or a spouse of an active military service member, we have challenging and rewarding career opportunities available for you. A committed and diverse workforce is our most important resource. Maximus is an Affirmative Action/Equal Opportunity Employer. Maximus provides equal employment opportunities to all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status or disabled status.
Pay Transparency
Maximus compensation is based on various factors including but not limited to a candidate’s education, training, experience, expected quality and quantity of work, required travel (if any), external market and internal value analysis including seniority and merit systems, as well as internal pay alignment. Annual salary is just one component of Maximus’s total compensation package. Other rewards may include short- and long-term incentives as well as program-specific awards. Additionally, Maximus provides a variety of benefits to employees, including health insurance coverage, life and disability insurance, a retirement savings plan, paid holidays and paid time off. Compensation shall be commensurate with job duties and relevant work experience. An applicant’s salary history will not be used in determining compensation.
What we are looking for –
A passionate, hungry, and motivated individual that is eager for a chance to join a young startup experiencing rapid growth. At OmniData, we are searching for a remote Senior Azure Data Engineer that has experience working on data warehousing and analytics projects, a strong technical aptitude, and the ability to provide direction for clients on the best strategies for their analytics goals. In return, we offer deep mentorship, a great work/life balance, and the opportunity to be part of creating a consulting firm that makes a difference for our clients!
What you will do –
You will work on various Big Data, Data Warehouse and Analytics projects for our world class customers. In addressing complex client needs, you will be integrated into appropriately sized and skilled teams. This will give you the opportunity to analyze requirements, develop data and analytical solutions, and execute as part of the project team, all while working with the latest tools, such as Azure Synapse Analytics and related Microsoft technologies.
Your Duties and Responsibilities –
- Contribute collaboratively to team meetings using your experience base to further the cause of innovating for OmniData clients.
- Instill confidence in the client as well as your teammates
- Work independently toward client success, at the same time knowing your own limitations and when to call on others for help.
What you must have to be considered –
- 2-3+ years of experience in Analytics and Data Warehousing on the Microsoft platform
- 2-3+ years working with Microsoft SQL Server
- Experience working with the Microsoft Azure stack (e.g. Synapse, Databricks, Data Factory, etc.)
What would be nice for you to have –
- Experience with Python
- Experience gathering requirements and working within various project delivery methodologies
- Experienced working as a customer facing consultant
- Exposure to DAX
- Strong communication skills tying together technologies and architectures to business results
- Some travel may be required (up to 20%) post-COVID-19
Benefits and Perks –
- Competitive salary and benefits commensurate with experience
- Mentorship from highly regarded industry specialists
- Exposure to the latest and greatest Microsoft technologies
- High growth potential for those with an entrepreneurial spirit.
About OmniData –
OmniData is a Portland based Data and Analytics consulting firm leveraging the Microsoft technology stack to help organizations build their Modern Data Estates, designed to serve their digital innovation needs for many years to come. To do this, we apply deep experience in Solution Architecture, Data, Analytics, and Technology to simplify the complex.
OmniData is offering you the opportunity to work with the entire lifecycle of large Data Projects, focused on next generation data warehousing, with surface points to Analytics, Machine Learning and AI. We offer a collaborative work culture that enables you to produce client results with a safety net from your team. You will get to work closely with very experienced consultants who will be able to provide mentorship and career guidance. At the same time, you will be rewarded for learning fast and executing within our teams to provide solutions for OmniData clients.
OmniData Is An Equal Opportunity Employer And All Qualified Applicants Will Receive Consideration For Employment Without Regard To Race, Color, Religion, Sex, National Origin, Disability Status, Protected Veteran Status, Or Any Other Characteristic Protected By Law.
Ness Digital Engineering provides strategic IT consulting to global enterprises. Our DevOps and Infrastructure practice provides solutions, methodologies, and strategic guidance for digital transformation, containerization, and automation. Our Financial Services team offers strong domain expertise and technology acumen to deliver feature-focused solutions in Capital Markets.
We solve complex business problems with technology and insight. Our business domain knowledge, technology expertise, and Agile delivery process have delivered seamless Digital Transformations at some of the largest customers globally. We’re an AWS Premier Consulting Partner, a Premier Confluent Systems Integrator and a Snowflake Select Services Partner.
As a Data Engineer you will:
- Serve as an expert technologist in implementing ETL data pipelines, streaming data solutions, data lakes and data warehouses
- Work on the architecture, design, implementation, and testing of advanced data solutions for Ness clients
- Exhibit expertise in data modeling, data warehouses, data lakes, and building ETL data pipelines
- Modernize our clients’ data platforms, transitioning to cloud-hosted solutions using AWS, Azure, GCP, and Snowflake
Requirements
- Strong relational database skills, including SQL and data modeling
- Experience using ETL / ELT tools such as EMR, Fivetran, Informatica
- Experience with data collection, data cleansing, and ETL processes
- UNIX/Linux skills including shell scripts and basic system administration
- Programming skills using modern programming languages like Python, Java, or JavaScript
- Cloud experience strongly preferred using AWS (Redshift, RDS, EMR, Glue) or Azure (Synapse Analytics, Azure SQL)
- Experience with data visualization tools such as PowerBI or Tableau
- Excellent verbal and written communication skills
- Ability to manage multiple projects simultaneously
Additional Desired Skills:
- Programming languages including Java, Scala, C++, C#, JavaScript, R
- Experience with software test automation
- Data warehouse technologies such as Snowflake, AWS Redshift, and/or Azure Synapse Analytics
- Experience with AWS ML technologies such as SageMaker
- Experience with big data analytics tools such as Spark or DataBricks
- Experience with streaming data analytics, Kafka, and/or Kinesis Streams
Education and Certification Requirements:
- An undergraduate degree is usually required, preferably in a STEM discipline.
- AWS or Azure certifications desirable (Solution Architect, Machine Learning, or Big Data specialty certification)
- Snowflake SnowPro certification desirable
Benefits
- Flexible work environment with a globally distributed team
- Competitive compensation packages including performance bonuses
- Paid vacation and sick time off
- Employer-subsidized medical, dental, and vision insurance
- Company-paid short- and long-term disability insurance
- A culture of cooperation and support
- Continual professional and personal development through employer-paid training and certifications
About the project
Spate is the machine intelligence platform used by top industry beauty brands. We analyze over 20 billion search signals to spot the next big beauty trend and help brands with their marketing/product development strategies.
As we expand to new Asian markets (Japan), we are looking to hire a Data Operations Manager to help build and manage our expanding datasets. In this role, you will oversee the Spate data expansion and entry process. The role requires a strong eye for detail and a passion for organization and project management. We would be looking at 10 to 20 hours a week (we can be flexible depending on your availability).
Responsibilities
- Oversee data process and quality assurance for each vertical/market
- Manage relationships with the data entry team
- Analyze datasets and investigate discrepancies or inconsistencies
- Curate interesting and unique trends for Spate content; brainstorm compelling topic ideas for upcoming reports
Requirements
Minimum qualifications
- 1-3 years of experience
- Exceptional verbal and written skills
- Meticulous and organized, with a high level of attention to detail
- Proven problem-solving skills using deductive reasoning, understanding hierarchical relationships, and identifying gaps in logic
- Demonstrated project management skills and ability to manage multiple priorities
- Self-starter and ability to work independently
Preferred qualifications
- Experience in SEA/SEM and SEO, CRM, or copywriting
About Spate
At Spate, we use data science to predict the next big consumer trend in beauty, personal care & food.
Spate was founded in 2018 by Yarden Horwitz & Olivier Zimmer, two ex-Googlers who led the trendspotting division at Google and uncovered trends such as turmeric, cold brew, and face masks. Spate has been funded by the prestigious Y Combinator incubator and Initialized Capital. We currently have ~90 clients in the U.S., mainly in the beauty space, from direct-to-consumer brands to big names such as L’Oréal, Estée Lauder, Unilever...
As two ex-Googlers with a passion for using data to spot new patterns in consumer behavior, we have made it our mission to build the world’s greatest consumer trends prediction platform of all time. And not just because we want to be trendy, but because we want to help brands get better at giving consumers what they really want.
Brands waste over $200BN every year due to product launch failures and inventory waste. By spotting Turmeric, we were able to tell brands to stop wasting money on kale products and provide consumers with glorious golden milk lattes instead - because that’s what consumers want.
How do we do this? We tap into publicly available consumer data (anonymous and aggregated) to identify interesting shifts in consumer behavior. We leverage the latest available technology in ML to solve problems in ways that have never been explored before.
Why Spate?
- Join a well-funded company that is working with the top brands in consumer goods
- Work directly with the founders to set the direction of the company
- Grow in a fast-paced environment
- Always be up-to-date on the latest trends!
We enjoy a casual atmosphere, but our culture is about getting things done. We are passionate yet pragmatic when it comes to solving problems in a fast-paced environment. Our standards are high, but we thrive on working with people we respect and can learn from. We’re flexible on work styles, as long as everyone is getting their work done - and getting it done well.
We are an equal opportunity employer where our ersity and inclusion are central pillars of our company strategy. We look for applicants who understand, embrace, and thrive in a multicultural and increasingly globalized world. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
Here at Hugging Face, we’re on a journey to advance good Machine Learning and make it more accessible. Along the way, we contribute to the development of technology for the better.
We have built the fastest-growing open-source library of pre-trained models in the world. With over 100M+ installs and 65K+ stars on GitHub, over 10,000 companies are using HF technology in production, including leading AI organizations such as Google, Elastic, Salesforce, Algolia, and Grammarly.
About the Role
As a data engineer for vision datasets, you will work on a 3-6 month project to catalyze progress in computer vision for the open-source and research community.
The project will deal with:
- analyzing publicly available vision datasets,
- providing better access to selected datasets within the 🤗 Datasets library,
- improving vision data pre-and post-processing features within the 🤗 Datasets library,
- evaluating state-of-the-art computer vision systems on a variety of vision/image datasets.
During your project, you will closely work with the vision community. The goal is to catalyze research in computer vision by making image preprocessing as easy as possible for as many datasets as possible, as well as providing reproducible baselines for state-of-the-art computer vision systems and empowering the vision community to improve current dataset documentation practices.
About you
You'll love this internship if you are passionate about current trends in computer vision and view sharing your work with the research community as a necessity.
You should be well-versed in Python, have some experience in image preprocessing, and not be (too) afraid to process multiple terabytes of image data on a daily basis. Experience with some tabular data libraries, e.g. Apache Arrow, as well as open-source contributions and the ability to communicate feature requests to a diverse open-source community are a plus! It is advantageous if you are comfortable working remotely, as most of our collaborations are conducted in a remote setting.
We encourage students enrolled in university (Ph.D., Master, or Bachelor), data scientists, and ML/Data engineers looking for new opportunities to apply for this internship.
More about Hugging Face
We are actively working to build a culture that values diversity, equity, and inclusivity. We are intentionally building a workplace where you feel respected and supported—regardless of who you are or where you come from. We believe this is foundational to building a great company and community, as well as the future of machine learning more broadly. Hugging Face is an equal opportunity employer, and we do not discriminate based on race, ethnicity, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or ability status.
We value development. You will work with some of the smartest people in our industry. We are an organization that has a bias for impact and is always challenging ourselves to grow continuously. We provide all employees with reimbursement for relevant conferences, training, and education.
We care about your well-being. We offer flexible working hours and remote options. We offer health, dental, and vision benefits for employees and their dependents. We also offer parental leave and unlimited paid time off.
We support our employees wherever they are. While we have office spaces in NYC and Paris, we're very distributed, and all remote employees have the opportunity to visit our offices. If needed, we'll also outfit your workstation to ensure you succeed.
We want our teammates to be shareholders. All employees have company equity as part of their compensation package. If we succeed in becoming a category-defining platform in machine learning and artificial intelligence, everyone enjoys the upside.
GitBook is a modern documentation platform. Our ambition is to empower teams through a new document standard suited for modern work and collaboration.
GitBook is now used by over 2M users and thousands of teams such as DeliveryHero, Netflix, Decathlon, or Celonis. With close to 25,000 sign-ups per month and 10% revenue growth month-over-month, we're looking to expand our Data team to actively contribute to our success.
< class="h3" style="border-width: 0px; border-style: solid; border-color: rgb(238 239 242/var(--tw-border-opacity)); border-image: initial; box-sizing: border-box; --tw-translate-x: 0; --tw-translate-y: 0; --tw-rotate: 0; --tw-skew-x: 0; --tw-skew-y: 0; --tw-scale-x: 1; --tw-scale-y: 1; --tw-transform: translateX(var(--tw-translate-x)) translateY(var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y)); --tw-scroll-snap-strictness: proximity; --tw-border-opacity: 1; --tw-ring-inset: var(--tw-empty, ); --tw-ring-offset-width: 0px; --tw-ring-offset-color: #fff; --tw-ring-color: rgba(59,130,246,0.5); --tw-ring-offset-shadow: 0 0 #0000; --tw-ring-shadow: 0 0 #0000; --tw-shadow: 0 0 #0000; --tw-shadow-colored: 0 0 #0000; --tw-blur: var(--tw-empty, ); --tw-brightness: var(--tw-empty, ); --tw-contrast: var(--tw-empty, ); --tw-grayscale: var(--tw-empty, ); --tw-hue-rotate: var(--tw-empty, ); --tw-invert: var(--tw-empty, ); --tw-saturate: var(--tw-empty, ); --tw-sepia: var(--tw-empty, ); --tw-drop-shadow: var(--tw-empty, ); --tw-filter: var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow); --tw-backdrop-blur: var(--tw-empty, ); --tw-backdrop-brightness: var(--tw-empty, ); --tw-backdrop-contrast: var(--tw-empty, ); --tw-backdrop-grayscale: var(--tw-empty, ); --tw-backdrop-hue-rotate: var(--tw-empty, ); --tw-backdrop-invert: var(--tw-empty, ); --tw-backdrop-opacity: var(--tw-empty, ); --tw-backdrop-saturate: var(--tw-empty, ); --tw-backdrop-sepia: var(--tw-empty, ); --tw-backdrop-filter: var(--tw-backdrop-blur) var(--tw-backdrop-brightness) var(--tw-backdrop-contrast) var(--tw-backdrop-grayscale) var(--tw-backdrop-hue-rotate) var(--tw-backdrop-invert) var(--tw-backdrop-opacity) var(--tw-backdrop-saturate) var(--tw-backdrop-sepia); font-weight: 
var(--company-header-font-weight,700); margin: 40px 0px 8px; line-height: 1.2; color: #252525; background-color: #ffffff;">Who uses GitBook?Some cool data applications use us to power their public documentation: Census, Castor, OpenMetadata and many more! (Snyk) We also have great organisations using us to build their internal knowledge base: Netflix, Decathlon or Adobe.
< class="h1" style="border-width: 0px; border-style: solid; border-color: rgb(238 239 242/var(--tw-border-opacity)); border-image: initial; box-sizing: border-box; --tw-translate-x: 0; --tw-translate-y: 0; --tw-rotate: 0; --tw-skew-x: 0; --tw-skew-y: 0; --tw-scale-x: 1; --tw-scale-y: 1; --tw-transform: translateX(var(--tw-translate-x)) translateY(var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y)); --tw-scroll-snap-strictness: proximity; --tw-border-opacity: 1; --tw-ring-inset: var(--tw-empty, ); --tw-ring-offset-width: 0px; --tw-ring-offset-color: #fff; --tw-ring-color: rgba(59,130,246,0.5); --tw-ring-offset-shadow: 0 0 #0000; --tw-ring-shadow: 0 0 #0000; --tw-shadow: 0 0 #0000; --tw-shadow-colored: 0 0 #0000; --tw-blur: var(--tw-empty, ); --tw-brightness: var(--tw-empty, ); --tw-contrast: var(--tw-empty, ); --tw-grayscale: var(--tw-empty, ); --tw-hue-rotate: var(--tw-empty, ); --tw-invert: var(--tw-empty, ); --tw-saturate: var(--tw-empty, ); --tw-sepia: var(--tw-empty, ); --tw-drop-shadow: var(--tw-empty, ); --tw-filter: var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow); --tw-backdrop-blur: var(--tw-empty, ); --tw-backdrop-brightness: var(--tw-empty, ); --tw-backdrop-contrast: var(--tw-empty, ); --tw-backdrop-grayscale: var(--tw-empty, ); --tw-backdrop-hue-rotate: var(--tw-empty, ); --tw-backdrop-invert: var(--tw-empty, ); --tw-backdrop-opacity: var(--tw-empty, ); --tw-backdrop-saturate: var(--tw-empty, ); --tw-backdrop-sepia: var(--tw-empty, ); --tw-backdrop-filter: var(--tw-backdrop-blur) var(--tw-backdrop-brightness) var(--tw-backdrop-contrast) var(--tw-backdrop-grayscale) var(--tw-backdrop-hue-rotate) var(--tw-backdrop-invert) var(--tw-backdrop-opacity) var(--tw-backdrop-saturate) var(--tw-backdrop-sepia); font-weight: 
var(--company-header-font-weight,700); margin: 0px 0px 8px; line-height: 1.2; color: #252525; background-color: #ffffff;">🤔 Why are we opening this position ?
You will be joining Rémi, our Head of Data, to spread insights across the company. As a product-led company with high growth, we strongly believe that data should be involved in any decision process: from product to sales.
On the team philosophy, we want to get people interested in working on any part of the data journey: data engineering, data analytics and data science. It doesn't mean being an expert in all three fields, but we believe that all data folks should aim to understand and cover most of the scope.
🙌 What will you be doing?
As a Data Analyst, you will:
- collaborate with stakeholders (Go-To-Market and Product) to define KPIs and success metrics that maximize company- and team-level performance
- discover and explain trends across data sources, and identify potential opportunities for growth or improvement
- be a top contributor to our data warehouse using dbt and BigQuery
- perform advanced product analysis using Amplitude
- design and develop comprehensive dashboards that provide self-serve analytics to stakeholders across the business
- create trainings and documentation to ensure adoption of our data solutions and recommendations
- help shape the dynamics and processes of the Data team, including our onboarding, hiring, and team goals
- develop a full understanding of the data stack
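To give a flavour of the product-analysis work above, here is a minimal, hypothetical sketch of computing an activation KPI from raw product events with pandas. The schema (`user_id`, `event`, `ts`) and the events themselves are invented for illustration, not GitBook's actual data:

```python
# Hypothetical sketch: computing a simple activation KPI from raw product
# events with pandas. Column names and values are illustrative only.
import pandas as pd

events = pd.DataFrame(
    {
        "user_id": [1, 1, 2, 2, 3],
        "event": ["signup", "create_doc", "signup", "signup", "create_doc"],
        "ts": pd.to_datetime(
            ["2024-01-01", "2024-01-02", "2024-01-03", "2024-01-03", "2024-01-05"]
        ),
    }
)

# A user counts as "activated" if they created a doc after signing up.
signed_up = set(events.loc[events["event"] == "signup", "user_id"])
activated = set(events.loc[events["event"] == "create_doc", "user_id"]) & signed_up
activation_rate = len(activated) / len(signed_up)
print(f"activation rate: {activation_rate:.0%}")
```

In practice the same logic would run as a dbt model over BigQuery tables; the pandas version just keeps the example self-contained.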
🛠 What tooling environment will you be working on?
Data stack:
dbt, BigQuery, Tableau, Amplitude, Segment, Airflow, Cloud Functions, Zapier, Python, Stitch, GitHub
Main 3rd-party systems:
HubSpot, Stripe, Firestore
🫶 You will be valued for:
- your curiosity and passion for the data field, which pushes you to work across all the data pillars
- your ability to extract insights from product analytics data
- your ability to put yourself in any GitBook member's shoes to understand their needs
- your ability to build and improve our data warehouse models using dbt
- your determination to make data central to GitBook's success (the evangelist part)
- your ability to collaborate on cross-team work in a remote, async environment
- bonus: knowledge of the B2B SaaS industry, allowing you to quickly understand our challenges
🌍 Location
Remote
✨ What's next?
First, we will take the time to review your application and will get back to you within a week, regardless of our decision. We know your time is valuable, so we work to move the process along quickly and keep it casual. We're not believers in "gotcha" questions or checking for skills you'll never actually use at GitBook. Here's what our process will look like:
- Meeting with Rémi, our Head of Data (60 min) to look for healthy alignment
- Technical exercise (60-90 min) to dive deep into your technical skills
- Meet the Founder call (45 min) with Samy, co-founder
- Meet the team (60 min) with a team member to confirm that you will thrive in our culture and answer any questions about what it's like working here
👥 Join GitBook
You will be joining during a pivotal moment for GitBook. We've enjoyed great success since we were founded, and now we're taking conscious steps to take our company to the next level. That means you will have the opportunity to build, to positively impact the company's trajectory, and to enjoy the benefits of helping grow GitBook 20x. Every single team member is a valuable addition to our culture, so it's important for us to state our company values:
- 🚀 Ambition (Aim higher)
- ✊ Ownership (Take control and Own it)
- 📈 Accountability (Be accountable to results)
- 🙋♀️ Care (Give a sh*t)
- 🏃♂️ Train (Grow yourself)
- 🤝 Genuine (Say it and accept it)
- 👯♂️ Team player (Leverage the team)
- 🏗 Architect (Plan & Build)
GitBook is a modern documentation platform. Our ambition is to empower teams through a new document standard suited to modern work and collaboration. GitBook is now used by over 2M users and thousands of teams such as DeliveryHero, Netflix, Decathlon, or Celonis. With close to 25,000 sign-ups per month and 10% month-over-month revenue growth, we're looking to expand our Data team to actively contribute to our success.
Who uses GitBook?
Some cool data applications use us to power their public documentation: Census, Castor, OpenMetadata, Snyk and many more! We also have great organisations using us to build their internal knowledge base: Netflix, Decathlon or Adobe.
🤔 Why are we opening this position?
You will be joining Rémi, our Head of Data, to spread insights across the company. As a product-led company with high growth, we strongly believe that data should be involved in any decision process: from product to sales.
On team philosophy, we want people interested in working on any part of the data journey: data engineering, data analytics and data science. That doesn't mean being an expert in all three fields, but we believe every data person should aim to understand and cover most of that scope.
🙌 What will you be doing?
As a Data Engineer, you will:
- own our data stack, maintaining and improving it
- bring best practices from software engineering to empower the data team
- be a top contributor to our growing data warehouse using dbt and BigQuery
- focus on solving business questions more than on setting up infrastructure
- empower everyone in the company by making data available to them
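As a hypothetical illustration of "best practices from software engineering" applied to a data pipeline, a transformation step written as a pure function is trivial to unit test. The field names below are invented for the example, not GitBook's actual warehouse schema:

```python
# Hypothetical sketch: a pure, unit-testable transformation step for a
# data pipeline. Field names are illustrative only.
from datetime import datetime, timezone


def normalize_event(raw: dict) -> dict:
    """Clean one raw tracking event before loading it into the warehouse."""
    return {
        "user_id": str(raw["user_id"]).strip(),
        "event": raw["event"].lower().replace(" ", "_"),
        "ts": datetime.fromtimestamp(raw["ts"], tz=timezone.utc).isoformat(),
    }


# Pure functions like this need no infrastructure to test:
assert normalize_event({"user_id": " 42 ", "event": "Doc Created", "ts": 0}) == {
    "user_id": "42",
    "event": "doc_created",
    "ts": "1970-01-01T00:00:00+00:00",
}
```

Keeping the transformation free of I/O means the same function can be exercised in a unit test, an Airflow task, or a Cloud Function without modification.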
🛠 What tooling environment will you be working on?
Data stack:
dbt, BigQuery, Tableau, Amplitude, Segment, Airflow, Cloud Functions, Zapier, Python, Stitch, GitHub
Main 3rd-party systems:
HubSpot, Stripe, Firestore
🫶 You will be valued for:
- your ability to build and maintain any data layer, drawing on past experience in the data engineering field
- your curiosity and passion for the data field, which pushes you to work across all the data pillars
- your ability to build and improve our data warehouse models using dbt
- your determination to make data central to GitBook's success (the evangelist part)
- your ability to collaborate on cross-team work in a remote, async environment
- bonus: knowledge of the B2B SaaS industry, allowing you to quickly understand our challenges
✨ What's next?
First, we will take the time to review your application and will get back to you within a week, regardless of our decision. We know your time is valuable, so we work to move the process along quickly and keep it casual. We're not believers in "gotcha" questions or checking for skills you'll never actually use at GitBook. Here's what our process will look like:
- Meeting with Rémi, our Head of Data (60 min) to look for healthy alignment
- Technical exercise (60-90 min) to dive deep into your technical skills
- Meet the Founder call (45 min) with Samy, co-founder
- Meet the team (60 min) with a team member to confirm that you will thrive in our culture and answer any questions about what it's like working here
👥 Join GitBook
You will be joining during a pivotal moment for GitBook. We've enjoyed great success since we were founded, and now we're taking conscious steps to take our company to the next level. That means you will have the opportunity to build, to positively impact the company's trajectory, and to enjoy the benefits of helping grow GitBook 20x. Every single team member is a valuable addition to our culture, so it's important for us to state our company values:
- 🚀 Ambition (Aim higher)
- ✊ Ownership (Take control and Own it)
- 📈 Accountability (Be accountable to results)
- 🙋♀️ Care (Give a sh*t)
- 🏃♂️ Train (Grow yourself)
- 🤝 Genuine (Say it and accept it)
- 👯♂️ Team player (Leverage the team)
- 🏗 Architect (Plan & Build)
Who We Are
TetraScience is the Scientific Data Cloud company with a mission to accelerate scientific discovery and improve and extend human life. The Scientific Data Cloud is the only open, cloud-native platform purpose-built for science that connects lab instruments, informatics software, and data apps across the biopharma value chain and delivers the foundation of harmonized, actionable scientific data necessary to transform raw data into accelerated and improved scientific outcomes. Through the Tetra Partner Network, market-leading vendors access the power of our cloud to help customers maximize the value of their data.
What You Will Do
- Own, prototype, and implement customer solutions
- Research and prototype data acquisition strategy for scientific lab instrumentation
- Research and prototype file parsers for instrument output files (.xlsx, .pdf, .txt, .raw, .fid, many other vendor binaries)
- Design and build data models
- Design and build Python data pipelines, unit tests, integration tests, and utility functions
- Work with the customer to test and make sure the solution fulfills their requirements and solves their need
- Coordinate project kickoff meetings; manage the customer relationship throughout the project, and conduct formal project closeout meetings
- Facilitate internal project post-mortems to identify areas of improvement on the next implementation
Requirements
What You Have Done
- 2+ years in Python and SQL
- Passionate about science and building solutions to make the data more accessible to the end-users
- Undergraduate or graduate degree in chemistry, biology, computer science, statistics, public health, etc.
- Wet lab experience or experience with scientific instruments is a strong plus
- Excellent communication skills, attention to detail, and the confidence to take control of project delivery
- Quickly understand a highly technical product and effectively communicate with product management and engineering
- Strong problem-solving skills
- Intellectually curious: Unwavering drive to learn and know more every day
- Ability to think creatively about how to solve project risks without reducing quality
- Team player and ability to "roll up your sleeves" and do what it takes to make the team successful
Benefits
- 100% employer-paid benefits for all eligible employees and immediate family members
- Unlimited paid time off (PTO)
- 401K
- Flexible working arrangements - Remote work + office as needed
- Company paid Life Insurance, LTD/STD
No visa sponsorship is available for this position
- As the Business Analyst, lead an agile team to deliver forecasted sprint goals, and solve problems efficiently and completely, according to the principles of Scrum development.
- Work closely with stakeholders to create and maintain a product backlog according to business value or ROI.
- Lead team sprints and road mapping processes
- Assess value, develop cases, and prioritize stories, epics, and themes to ensure work focuses on those with maximum value that are aligned with product strategy
- Provide vision and direction to the Agile development team and stakeholders.
- Keep abreast of Agile/Scrum best practices and new trends
- Experience owning a product delivery cycle and delivering software solutions.
- Manage and respond promptly and professionally to defect reports. Aid support personnel as needed to determine system problems.
- Take responsibility for and lead new initiatives in content accuracy and quality.
- Focus the team towards utilizing standards/metrics that provide guidance and feedback.
- Oversee current state process capture, identify, recommend and implement process re-engineering to support automation.
- Responsible for delivering ad hoc projects to support leadership, working with the product managers on strategic initiatives.
- Ensure that all content released to IMO Clients meets our high standards and expectations.
- Acquire working knowledge of IMO terminology solutions and related technology tools.
- At least 3 years of software Business Analyst working with an Agile team or equivalent experience.
- Proficient in gathering business requirements, process flows and use cases.
- In-depth knowledge of Agile process and principles.
- Proficiency in Microsoft Office applications and experience with JIRA/Confluence/Aha! Software a plus.
- Outstanding communication, presentation, and leadership skills.
- Sharp analytical and problem-solving skills.
- Excellent planning, organizational, and time management skills.
The Last Mile (TLM) is looking for a Senior Manager of Research and Analytics dedicated to supporting TLM's mission through organizational learning and growth. This person will be responsible for the development of TLM's research and evaluation processes and projects across in-prison and reentry programs. The Senior Manager of Research and Analytics will ensure that a consistent culture of reflection and learning exists throughout the organization to inform the current and future iterations of our programming.
< class="h2">Responsibilities:- Lead all efforts in designing and managing our internal program performance measurement work
- Analyze all indicators and outcomes to help leadership team make appropriate strategic and programmatic decisions
- Work with department leads to ensure that all data is correctly gathered and inputted on a regular schedule by all programming staff
- Implement any other necessary evaluation methods as appropriate (e.g., personal development assessment surveys, focus groups, photo journaling, etc.)
- Research best practice implications based on evaluation findings
- Develop partnerships with external evaluators (e.g., university research partners) and other organizations that can help advance our evaluation and research capacities, serving as the liaison with such collaborators
- Conduct research and help develop recommendations related to various policy opportunities aimed at improving outcomes for justice impacted folks
- Translate and disseminate evaluation results (e.g., writing policy briefs and publications) to help shift policy discussions based on promising and best practices revealed
- Lead the process of creating and facilitating necessary trainings for staff regarding data collection and management.
- Prior background in evaluation work required
- Experience designing and implementing Randomized Control Trials or other large-scale research projects preferred
- MA or equivalent work experience in research-related field
- Experience with Salesforce administration preferred
- Excellent oral and writing skills
- Strong organizational skills and demonstrated high-level strategic thinking and planning
- Embrace challenges and new responsibilities with creativity, initiative, and self-direction
- Experience in justice reform or non-profit settings preferred
- Empathy, passion and understanding of TLM's mission, pillars, and community interests
Since its founding, Intuition Machines has been on the forefront of innovation, leveraging and developing new technologies to solve complex problems. Our team, composed of leading researchers and developers, are constantly innovating toward an improved future fueled by the promise of privacy, security, and performance. We work in a casual and fast-paced environment, with a team distributed around the world, hundreds of millions of users, and a rapidly growing customer base and product suite.
Join us as we transform security and machine learning online.
As a Lead Data Engineer you will be responsible for technical leadership of data engineering projects. You will do that by designing and improving high throughput data pipelines, promoting best practices in terms of high performance data processing, infrastructure setup and development process. You will have the ability to shape the data engineering capabilities for state of the art, large scale security and machine learning products.
< class="h3">What will you do:
- Lead the data engineering initiatives and projects
- Design, document and build scalable data infrastructure
- Collaborate with software engineers, ML engineers, product managers and growth teams
- Set up data quality, monitoring and alerting infrastructure
- Ensure high performance and availability of our data infrastructure
- Proven experience in designing and implementing end-to-end data solutions (5+ years of experience)
- Strong python programming skills with emphasis on clean, readable and testable code
- Experience with high throughput data systems and streaming architectures
- Experience working with Kafka infrastructure and applications
- Solid understanding of OLAP databases (preferably Clickhouse)
- Hands on experience with Kubernetes
- Familiarity with public cloud providers (AWS or Azure)
- Familiarity with security frameworks, attack vectors, botnets
- Experience working with IaC and GitOps solutions
- Experience with monitoring, observability and data quality tools
- Experience with exploratory data analysis and data science solutions
- Ability to work with cutting-edge technology
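The high-throughput streaming work above would, in production, run on Kafka topics with consumer groups. As a stand-in sketch using only the standard library, the shape of the processing loop looks like this; the event fields and the "suspicious" scoring rule are hypothetical, invented purely for illustration.

```python
import queue
import threading

# Bounded buffer: a full queue blocks the producer, giving simple backpressure,
# analogous to how a consumer lagging on a Kafka partition slows end-to-end flow.
events = queue.Queue(maxsize=1000)
processed = []
SENTINEL = object()  # end-of-stream marker for this toy example

def producer(n):
    for i in range(n):
        events.put({"event_id": i, "score": i % 5})
    events.put(SENTINEL)

def consumer():
    while True:
        item = events.get()
        if item is SENTINEL:
            break
        # Example transformation: flag high-score events (hypothetical rule).
        item["suspicious"] = item["score"] >= 4
        processed.append(item)

t_prod = threading.Thread(target=producer, args=(100,))
t_cons = threading.Thread(target=consumer)
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
```

The real system would swap the queue for a Kafka client, add batching and offset management, and feed monitoring/alerting as the requirements above describe.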
- Fully remote position with flexible working hours
- An inspiring team spread all over the world
- A unique chance of being a part of #hCaptcha revolution
RevenueCat makes building, analyzing and growing mobile subscriptions easy. We launched as part of Y Combinator's summer 2018 batch and today are handling more than $1.2B of in-app purchases annually across thousands of apps.
We are a mission driven, remote-first company that is building the standard for mobile subscription infrastructure. Top apps like VSCO, Notion, and ClassDojo count on RevenueCat to power their subscriptions at scale.
Our 50 team members (and growing!) are located all over the world, from San Francisco to Madrid to Taipei. We're a close-knit, product-driven team, and we strive to live our core values: Customer Obsession, Always Be Shipping, Own It, and Balance.
We’re looking for a Staff Data Engineer to join our newly formed data engineering team. As a Staff Engineer, you will be responsible for leading the effort to design, architect and support our entire data platform and will play a key role in defining how our systems evolve as we scale.
< class="h3">About you:- You have 8+ years of software engineering experience.
- You have 5+ years of experience working with and building enterprise-scale data platforms.
- You have excellent command of at least one of the mainstream programming languages and some experience with Python.
- You have helped define the architecture, data modeling, tooling, and strategy for a large-scale data processing system, data lakes or warehouses.
- You have used workflow management tools (e.g. Airflow, Glue) and have experience maintaining the infrastructure that supports these.
- You have hands-on experience building CDC-based (Change Data Capture) ingestion pipelines for highly transactional databases. Experience with Postgres and logical replication is a plus.
- You have a strong understanding of modern data processing paradigms and tooling, OLTP & OLAP database fundamentals.
- Dimensional modeling and reporting tools like Looker are a plus, but not required
- You have experience building streaming/real-time data pipelines from a batch architecture approach.
- Help define a long-term vision for the Data Platform architecture and implement new technologies to help us scale our platform over time
- Help the team apply software engineering best practices to our data pipelines (testing, data quality, etc)
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, using SQL and AWS technologies
- Clearly define data ownership & responsibility, audit and compliance framework, and general security of the data lake
- Partner with product managers, data scientists, and engineers across teams to solve problems that require data
- Drive the evolution of our data platform to support our data processing needs and provide frameworks and services for operating on the data
- Analyze, debug and maintain critical data pipelines
- Work with our core infrastructure team to create and improve frameworks that allow derived data to be used in production environments
- Contribute to standards that improve developer workflows, recommend best practices, and help mentor junior engineers on the team to grow their technical expertise
- Get up to speed on our architecture and learn the problem domain
- Understand our current data requirements and where things stand today
- Gain understanding of our current data pipelines
- Work with your team to help design and architect our data platform
- Work with product managers, engineers and data scientists to help come up with a plan to gain consensus on the approach
- Analyze, debug and maintain critical data pipelines
- Develop thorough understanding of our data platform
- Know all the major components of our system and be able to debug complex issues
- Be able to detect bottlenecks, profile, and come up with enhancements
- Start participating in hiring for the company
- Thoroughly understand our data processing needs and able to spec, architect, and build solutions accordingly
- Mentor other engineers joining the team
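The CDC-based ingestion mentioned above streams inserts, updates, and deletes from a transactional database. A real pipeline would consume Postgres logical replication rather than diff snapshots, but the event shapes are the same; the sketch below simulates them by comparing two `{primary_key: row}` snapshots. Table contents are invented for illustration.

```python
def capture_changes(before, after):
    """Return CDC-style events between two {pk: row} snapshots of a table."""
    events = []
    for pk, row in after.items():
        if pk not in before:
            events.append({"op": "insert", "pk": pk, "row": row})
        elif before[pk] != row:
            events.append({"op": "update", "pk": pk, "row": row})
    for pk in before:
        if pk not in after:
            events.append({"op": "delete", "pk": pk})
    return events

# Hypothetical subscription table: row 1 upgraded, row 2 cancelled, row 3 added.
before = {1: {"plan": "free"}, 2: {"plan": "pro"}}
after = {1: {"plan": "pro"}, 3: {"plan": "free"}}
changes = capture_changes(before, after)
```

Downstream, these events would be applied to the lake/warehouse in commit order, which is why logical replication (which preserves transaction ordering) beats periodic diffing for highly transactional sources.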
- $218,000 to $245,000 USD salary regardless of your location
- Competitive equity in a fast-growing, Series B startup backed by top tier investors including Y Combinator
- 10 year window to exercise vested equity options
- Fully remote work environment that promotes autonomy and flexibility
- Suggested 4 to 5 weeks time off to recharge and focus on mental, physical, and emotional health
- $2,000 USD to build your personal workspace
- $1,000 USD annual stipend for your continuous learning and growth
Data Entry – Quality Assurance Specialist
Location: US National – Virtual
Full-Time
The Data Entry-Quality Assurance Specialist in our Customer Boarding Department is responsible for reviewing merchant customer data in multiple databases to identify any inconsistencies that need to be corrected for newly boarded accounts. Success in this role requires strong attention to detail while working in a fast-paced environment.
A Quality Assurance Specialist in our Customer Boarding Department is responsible for maintaining data integrity while adhering to company policies and practices. This position is accountable for completing detailed data analysis of newly boarded customer accounts and identifying inconsistencies in the company CRM that need to be corrected. Success in this role requires an energetic, solution-oriented individual with strong attention to detail who consistently meets service levels. This position is key in building customer loyalty and ensuring revenue integrity.
What You’ll do:
- Review newly approved merchant account details to ensure and maintain data integrity within NAB systems, Global systems and/or First Data systems
- Verify that key data information is accurate in all systems matching the merchant application
- Identify and execute corrections for inaccurate information timely and accurately
- Daily support of the Customer Boarding call queue
- At a high level, manage and organize productivity through the effective use of all available resources including database systems and query reports
- Provide positive customer experiences while maintaining a high degree of ethical behavior in all aspects of daily business
- Build and maintain strong working relationships with all NAB and TMS employees and departments by keeping a positive attitude and a collaborative focus in all interactions
- Adhere to company policies as defined
- Accept and complete assignments with an open, cooperative, positive, and team-oriented attitude
- Perform special projects as assigned
What we Need from you:
- High School Diploma or G.E.D.
- Payment industry experience preferred
- Strong attention to detail
- Ability to communicate feedback, information, and directions both verbally and written
- The ability to work at speed and with accuracy
- Excellent organizational skills, multi-tasking and prioritization in a fast-paced work environment
- Thorough understanding of rates and fees preferred
- Strong customer service skills
- Flexibility in work schedule to accommodate business needs
- Ability to problem solve and de-escalate upset customers
- Inspiring and positive attitude
At Health IQ, our vision is to ensure that the 1.5B seniors live their golden years better than the previous generations. We believe in rewarding the health conscious through savings, literacy, and educational tools.
We are a diverse and innovative group of individuals who thrive on big data and proven results. Our approach has enabled us to grow from roughly 200 to 900+ employees over the last year and we expect continued growth and opportunities. If you believe that being health conscious can improve lives and want to make a tangible difference through your work, then you’ll love what we’re doing at Health IQ – apply and join the team!
Data Scientist
Health IQ has set out to completely change the way seniors choose their healthcare plans by using AI/data science and world-class user experience to bring transparency, objectivity, and intelligence to the insurance purchasing process. We need a Data Scientist to drive this initiative end to end and establish Health IQ as a leader in the digital insurance market.
At Health IQ, the Data Scientist uses predictive analytics and innovative machine learning models to create value from data. This role is at the heart of finding and proving innovative solutions and is responsible for developing and driving strategic modeling initiatives while maintaining a close partnership with IT to ensure that our models can be deployed quickly and monitored in a flexible deployment framework.
As a Data Scientist, you will serve as a technical and thought leader on this diverse and highly skilled team. You will design and develop inventive solutions to drive innovation and the delivery of organizational value. You’ll synthesize large datasets and solve complex problems by using advanced machine learning and statistical modeling. You’ll work in a highly collaborative, team environment, guiding and mentoring junior data scientists and collaborating with multiple stakeholders. You will assist management in the communication of insights and the implementation of impactful data science solutions across the organization.
You will deliver actionable insights from your models that can be incorporated into existing Health IQ products and new programs. The ideal candidate for this role will have a passion for creating solutions, an attitude of creativity, and continual learning.
What you will be doing:
- Build core analytical models that drive Health IQ digital insurance products.
- Bring core domain expertise about Medicare, health insurance and population insights.
- Communicate complex quantitative analyses in a clear, precise, and actionable manner to management and executive-level audiences while building relationships with their partners
- Collaborate with business leaders to understand business opportunities and formulate analytical solutions for problem-solving, working alongside other analytics individuals and teams
- Design innovative algorithms and machine-learning approaches for handling some of the most challenging and exciting datasets in today’s insurance industry
- Provide thought leadership on the practical application of machine learning and advanced analytical methods and cultivate a data-driven culture across the company
- Deliver clean, reusable, and scalable code
- Work closely with Data & Engineering to deploy models
What we’re looking for:
- Master’s degree in Computer Science, Math, Statistics, Economics, or any technical field that provides a solid basis for analytics is required; a Master’s with relevant experience is acceptable
- 2+ years of experience in data science, statistics, computer science, or mathematics where you designed, developed, evaluated, and deployed predictive modeling, machine learning, and advanced analytics
- End to end experience from data wrangling to model deployment delivering added value with varying levels of ambiguity
- Extensive experience solving analytical problems using quantitative and qualitative approaches, especially related to Medicare, healthcare insurance plans, and/or senior-focused population insights.
- Experience with state-of-the-art techniques in machine learning algorithms, including deep neural networks, NLP, dimensionality reduction, ensemble methods, graph algorithms
- Excellent communication skills and experience in working with stakeholders
- Strong prioritization skills while being dynamic and agile
- Ability to advise one or more areas, programs, or functions
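Among the techniques listed above, ensemble methods are the easiest to sketch compactly. The toy below shows the core idea, majority voting over several weak classifiers, using hand-written rules as stand-ins; in practice the "classifiers" would be trained models (gradient boosting, neural networks), and the risk rules and field names here are entirely made up for illustration.

```python
from collections import Counter

# Hypothetical weak classifiers: each votes "high_risk" or "low_risk".
def rule_age(person):      return "high_risk" if person["age"] > 70 else "low_risk"
def rule_smoker(person):   return "high_risk" if person["smoker"] else "low_risk"
def rule_activity(person): return "low_risk" if person["active"] else "high_risk"

ENSEMBLE = [rule_age, rule_smoker, rule_activity]

def predict(person):
    """Majority vote across the ensemble's individual predictions."""
    votes = Counter(rule(person) for rule in ENSEMBLE)
    return votes.most_common(1)[0][0]

label = predict({"age": 75, "smoker": False, "active": False})  # 2 of 3 vote high
```

The same voting structure underlies bagging and random forests: many moderately accurate, decorrelated predictors combine into one more reliable prediction.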
To make the world a healthier place, we started in our backyard. We created a health-conscious environment that allows each of our employees to reach their personal health goals. Below are a few of the employee-led programs that make working at Health IQ truly unique.
- Career Growth
As a rapidly growing company, new opportunities for growth and development continue to become available. We believe in promoting from within, and look to reward high performing employees with new opportunities.
- Celebration
We believe the key is to celebrate those who have improved their health rather than cajole those who haven’t. We look for employees who take this positive and optimistic view in their work lives.
- Service to Seniors
Our whole mission and vision is to serve seniors to improve their health. We want employees who believe true happiness comes from being in service to others. We call these employees Health Heroes.
- Personal Responsibility
We believe that only you can make the decision to improve your own health and no one else can do this for you. We look for employees that tend to do the same.
- Excellent benefits
Competitive rates for our employees' costs toward medical, dental and vision insurance. We offer a 401K, and pay 100% of your life insurance benefit option! We also offer various Flexible Spending Account (FSA) benefits to meet your and your family's needs. Only full-time employees are eligible for benefits.
- Join a Remote-first Culture
Our flexible, totally remote environment allows us to hire top talent throughout the U.S. The world has changed, and we’ve learned that being in an office is no longer the best way for our employees and our company to thrive.
We’re building a better health system! At Nice, we’re making healthcare accessible by delivering integrated primary, musculoskeletal, and mental health care to patients when they want it through a combination of in-home and virtual visits while also improving the quality of care by eliminating the complexity, poor management, and time constraints that hold clinicians captive.
Building a better health system for all requires the input and perspectives of all. Nice actively seeks a mixture of beliefs, backgrounds, education, and points of view to help us drive better, more informed design and business decisions. Nice is committed to building a diverse, inclusive, and equitable workforce and we diligently provide equal employment opportunities for all applicants and employees.
Product
The Product team at Nice Healthcare has an exciting opportunity for a Staff Data Analyst. In this role, you’ll wrangle our data, nurture our data analysis program, and partner with leaders to interpret and apply data that will drive us forward. As a Product team - made up of product managers, designers, researchers, and data analysts - we believe in succeeding as a team and setting measurable goals to guide our work together. We will cultivate your career through the investment of time and materials in the discovery of your career path.
What you’ve done before:
- You have significant experience with healthcare data and tools, including medical records, claims datasets, ICD-10 codes, CPT codes, and groupers (more than 3 years)
- You are familiar with the role of data on product teams, and in a startup or midsize business
- You have strong expertise in at least Python or R, as well as SQL
- You excel at your role by seeking to understand the business and its users
- You want to work on a wide range of problems and questions
What you’ll do at Nice:
- Work with medical record and claims datasets to accurately gauge the efficacy of various care models and interventions
- Become the caretaker of our existing datasets - keep them tidy and easy to leverage
- Transform and mine our data in support of key questions and decision points
- Nurture the development and iteration of our KPIs and other important metrics
- Collaborate with our internal engineers to capture data thoroughly and thoughtfully
- Build clean and informative dashboards and visualizations
- Spend time learning from and alongside other leaders at Nice
- Proactively derive and communicate insights to stakeholders
- Support key product decisions by investigating and representing data throughout the software development lifecycle
- Define best practices around how we capture data, how we organize it and how we use it
- Ensure the privacy and security of our data in collaboration with IT leadership
- Consult on methods of data collection and experimentation
- Actively mentor other analysts on the team
- Support the direction and prioritization of other analysts’ work (pulling reports, visualization, monthly reporting preparation).
What Nice offers you:
- 100% remote work environment -- work from anywhere in the U.S.
- Company paid Medical, Dental, Vision and Life Insurance
- Competitive salary
- 25 Days of PTO – that we actively encourage you to use
- Two "No Meeting" days every week
- Growth and development opportunities
- Personal enrichment & wellness stipend
- 401k with a 3% employer contribution
- Personal and family use of Nice Healthcare (in eligible cities)
- The nicest team members and work environment
- And much more!
About Nice Healthcare
Nice Healthcare is a technology-enabled full-service primary care clinic without a physical location that treats our patients in the comfort of their homes with in-person visits or online video calls.
We foster an open and supportive company culture that values the input and ideas of all team members no matter their role. We are an innovative company in that we are revolutionizing the way patients receive primary care services and we don’t settle for the status quo - we are always implementing new processes and technology to make our work more efficient and productive.
We are committed to building a workforce that is diverse and inclusive. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
Title: Data Entry Specialist
Location: United States
JOB SNAPSHOT
- Employee Type: Full-Time
- Location: Work From Home
JOB DESCRIPTION
- Remote positions available – temporary
- $17.00 per hour
Founded in 1980, MultiPlan is the industry’s most comprehensive provider of healthcare cost management solutions. We provide the most comprehensive portfolio of cost management solutions; helping payers manage the cost of care.
We are seeking multiple temporary associates for an Intake position.
- This is a temporary position which is expected to last 60 to 120 days.
- The expected start date is August 16, 2022.
- A training class is provided during the first week of employment. New employees will need to work 8:00 am to 4:30 pm CT during the training.
- After training, there is flexibility in the work schedule
Responsibilities:
- Intake and create cases in all applicable systems.
- Perform timely data entry of necessary information
- Research appropriate systems to identify data needed to complete cases.
- Ensure compliance with HIPAA regulations and requirements.
- Demonstrate Company’s Core Competencies and values held within
- Please note due to the exposure of PHI sensitive data – this role is considered to be a High Risk Role.
- The position responsibilities outlined above are in no way to be construed as all encompassing. Other duties, responsibilities, and qualifications may be required and/or assigned as necessary.
JOB REQUIREMENTS
- High School diploma or equivalent.
- Ability to efficiently use a keyboard and quickly navigate software applications.
- High speed internet access.
- Quiet work area without distractions.
- Regular and consistent attendance and adherence to work schedule.
- Knowledge of medical insurance terminology preferred.
- Communication skills (verbal, written, listening).
- Ability to work without frequent supervision.
- Ability to maintain confidentiality in all required situations.
- Ability to use software, hardware, and peripherals related to job responsibilities.
Our client's Database Engineers are relied on to build the future of our direct-to-home service delivery platform. As a part of our Engineering department, based in beautiful Provo, UT, this role requires the ability to move quickly, think deeply and work well with others on your Agile Scrum team: engineers, UX, product owners, and stakeholders. As a Senior Database Engineer, you will engineer, code, and test resilient, highly-scalable database systems that support both our web applications as well as backend APIs for our mobile apps servicing our hundreds of thousands of customers. We are significantly building out the engineering teams at Aptive and are seeking talented coders who love growth to be a part of this expansion.
Responsibilities include:
- Work experience in writing complex SQL queries in MySQL and in building stored procedures and views.
- Manage different databases through multiple product lifecycle environments, from development to mission-critical production systems.
- Configure and maintain database servers and processes, including monitoring of system health and performance, to ensure high levels of performance, availability, and security.
- Apply data modeling techniques to ensure development and implementation support efforts meet integration and performance expectations.
- Independently analyze, solve, and correct issues in real-time, providing problem resolution end-to-end.
- Refine and automate regular processes, track issues, and document changes.
- Assist developers with complex query tuning and schema refinement.
- Build ER diagrams and help develop and refine DB standards, code reviews, and release processes.
- Provide support for critical production systems.
- Perform scheduled maintenance and support release deployment activities after hours if required.
- Capacity planning and delivering robust and scalable databases.
- Backup & restore databases, ensuring a well-defined and tested disaster recovery strategy.
- Share domain and technical expertise, providing technical mentorship and cross-training to other peers and team members.
- Solve technical problems by working closely with Agile scrum masters, UX, product managers, and remote teams.
- Review and correct code for quality and design.
- Design high-performance database schema and code architecture.
- Empower remote teams to deliver stable, high-performing, and reliable code. This requires regular interactions with remote teams in the mornings.
- Contribute significantly to sprints, meet sprint deadlines, help other team members with their sprint commitments, and take the lead on urgent tasks.

Required Qualifications:
● 7+ years of experience writing SQL queries and performance tuning.
● Experience in designing, modeling, and implementing database DDL and DML.
● Working knowledge of database and architecture best practices.
● Strong written and verbal communication skills.
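As a rough illustration of the query-tuning work described above, the sketch below uses Python's built-in sqlite3 module (standing in for MySQL, purely so the example is self-contained) with a made-up `orders` table to show how adding an index changes a query plan:

```python
import sqlite3

# Hypothetical schema, used only for illustration.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, float(i)) for i in range(1000)],
)

# Without an index, a lookup by customer_id scans the whole table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchone()
print(plan[-1])  # e.g. a full-table SCAN

# Adding an index turns the scan into an index search.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchone()
print(plan[-1])  # e.g. a SEARCH using idx_orders_customer
```

The same EXPLAIN-driven workflow applies in MySQL, though the plan output format differs.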
Benefits:
● Medical, Dental, and Vision Benefits.
● Group Health, Dental, and Vision plans.
● Paid holidays.
● Paid time off.
● Access to a full-sized indoor basketball court, game room with theater, pool table, golf simulator, and more.
● Upbeat and exciting company culture and much more!

We are a US software development company delivering high-quality, cost-effective custom application development to clients worldwide. As a technology consulting company, we also help our clients with their digital transformation process.
Currently, we are seeking a Tech Data Lead:
What You Will Be Doing:
- Developing and implementing an overall organizational data strategy that is in line with business processes. The strategy includes data model designs, database development standards, implementation and management of data warehouses and data analytics systems.
- Identifying data sources, both internal and external, and working out a plan for data management that is aligned with organizational data strategy.
- Coordinating and collaborating with cross-functional teams, stakeholders, and vendors for the smooth functioning of the enterprise data system.
- Managing end-to-end data architecture, from selecting the platform, designing the technical architecture, and developing the application to finally testing and implementing the proposed solution.
- Planning and executing big data solutions using technologies such as Hadoop; the role entails complete life-cycle management of a Hadoop solution.
Your Profile Includes:
- Knowledge of the following data tools: Airflow, Postgres Aurora, Fivetran.
- Experience working with Python, AWS and Apple Search Ads.
- Experience generating data files in an internal format using the Data Pipeline Infrastructure.
- Ability to implement common data management and reporting technologies, as well as the basics of columnar and NoSQL databases, data visualization, unstructured data, and predictive analytics.
- Understanding of predictive modeling, NLP and text analysis, Machine Learning (Desirable).
Work Breakdown
- Ingestion: Implement the data pipeline from Fivetran source (Postgres Aurora) to internal file generation.
- Load: Implement the data pipeline from the generated internal file during ingestion to loading into the client environment's datastore. Below are the files that need to be loaded.
- Data source tables and related infrastructure preparation.
- Feature implemented as per requirements and as per Engineering Excellence guidelines.
- The implementation must follow documented playbook for integrating media sources through Fivetran.
- All code must pass the CI/CD pipeline, including Python linting, Black formatting, and 100% test coverage for functional code using patterns.
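As a minimal sketch of what functional code with full test coverage might look like under such a pipeline, the hypothetical transform and pytest-style test below follow Black-compatible formatting; the function name and behavior are illustrative, not taken from the playbook:

```python
# normalize_row is a hypothetical transform of the kind such a pipeline might contain.
def normalize_row(row: dict) -> dict:
    """Lower-case keys and strip whitespace from string values."""
    return {
        key.lower(): value.strip() if isinstance(value, str) else value
        for key, value in row.items()
    }


# A pytest-style test exercising both branches, toward the 100% coverage target.
def test_normalize_row():
    assert normalize_row({"Name": "  Ada ", "Age": 36}) == {"name": "Ada", "age": 36}


test_normalize_row()
```

In the real pipeline, the test would be collected by pytest and the coverage gate enforced in CI rather than invoked inline.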
Your work makes the difference between a user seeing a bus drive away or reaching it just in time. We feel a great sense of responsibility at Citymapper. Millions of users around the world trust our green app to be on time for work, a job interview or a date.
As one of our Data Analysts you will build the timetable data that powers the Citymapper app and help launch new cities. You'll use secret magic (and our internal tools) to transform often messy transit schedules into shiny and reliable user information whilst building, validating, and shipping data directly to users on a daily basis.
Working with our engineers to improve tools and automate tasks, you'll manage data in a range of French and European cities across different formats.
This role is designed for French-speaking contractors based anywhere in the world, but with an obsessive knowledge and adoration for public transport networks.
We are a diverse team of transport enthusiasts from all around the world with extensive language skills, who are not afraid to get our hands dirty with transit data. Join us for the ride!

Requirements
The position is open to applicants with all levels of experience as we'll teach you the technical skills to succeed. You'll need:
- A true passion for public transport and cities, good understanding of how public transport networks operate.
- A technical mindset, comfortable dealing with data, willingness to learn new data skills.
- A hands-on, proactive, practical, pragmatic attitude.
- An exceptional attention to detail with good organisational skills.
- Some familiarity with common transit data formats or data wrangling is a plus (GTFS, TransXChange, SIRI, HAFAS, JSON, XML, etc.).
- French and English are a must, any other European language a plus.
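To give a flavor of the transit formats mentioned above, here is a small sketch that parses a GTFS-style stop_times fragment with Python's csv module; the trip and stop IDs are invented, and real feeds are far messier:

```python
import csv
import io

# A tiny GTFS-style stop_times fragment (column names follow the GTFS spec;
# the trip and stop IDs are made up for illustration).
raw = """trip_id,arrival_time,departure_time,stop_id,stop_sequence
T1,08:00:00,08:00:30,S1,1
T1,08:07:00,08:07:30,S2,2
T1,08:15:00,08:15:30,S3,3
"""


def to_seconds(hms: str) -> int:
    """GTFS times can exceed 24:00:00, so parse by field rather than with datetime."""
    h, m, s = (int(part) for part in hms.split(":"))
    return h * 3600 + m * 60 + s


rows = list(csv.DictReader(io.StringIO(raw)))
# Validate that stops on a trip are ordered by time — a typical sanity check
# when turning messy schedules into reliable user information.
times = [to_seconds(r["arrival_time"]) for r in rows]
assert times == sorted(times)
print(
    f"trip {rows[0]['trip_id']}: {len(rows)} stops, "
    f"{(times[-1] - times[0]) // 60} min end to end"
)
```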
Benefits
- Contractor position in a remote-first team.
- Working on something interesting and meaningful - helping to make cities usable.
- Working with a not-too-big, diverse engineering team.
- Arcane public transport knowledge with which to dazzle your friends.
We believe that diverse teams are the best teams and we're proud to be an equal opportunities employer. We welcome and will consider all applications regardless of age, disability, gender re-assignment, marriage, pregnancy, maternity, race or nationality, religion or belief, sex and sexual orientation (and any other status protected by applicable law).
Hi!
We are Genesis Growth Accelerator. We are building a unique model of working with promising B2C IT products: we invest in projects at early stages, scale them up, and help build successful companies that serve millions worldwide.
Over 100 mln people across the world have already used our products and many more are yet to come.
Our mission is to transform Ukraine from an outsourced hub into a product state. The state, where ideas are born, developed, and owned from the first $1 of revenue to a unicorn IPO.
We are now looking for a Business Data Analyst, who will increase the capitalization of businesses at an early stage.
RESPONSIBILITIES:
- Hypothesis generation to create, scale, and optimize business growth levers across a wide product range;
- Planning, coordination, and analysis of marketing and product A/B tests;
- Support in the development of analytical solutions for mobile and web products across Genesis Growth Accelerator.
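For the A/B-testing responsibility above, here is a minimal sketch of a two-proportion z-test using only the Python standard library; the conversion counts are made up for illustration:

```python
from math import sqrt
from statistics import NormalDist


def ab_test_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test; returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value


# Hypothetical experiment: 5.0% vs 6.5% conversion on 2,400 users per arm.
z, p = ab_test_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In practice one would pre-register the sample size and significance threshold before running the test rather than peeking at interim results.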
HARD SKILLS:
- Knowledge of SQL/Excel;
- Knowledge of Python (NumPy, Pandas, Matplotlib / Seaborn, functional & OOP principles);
- Understanding the concept (architecture) of DWH class systems;
- Confident application of key statistical and probability theory concepts;
- Basic understanding of key product metrics for mobile applications;
- Advanced English proficiency.
WOULD BE A PLUS:
- Experience with Tableau/PowerBI;
- Experience with REST API and HTTP API;
- Experience with Google Cloud Platform (Cloud Storage, BigQuery);
- Knowledge of mobile marketing intelligence products including their capabilities and limitations (SimilarWeb, Sensor Tower, Appannie, etc.).
SOFT SKILLS:
- Optimism. You can't wait to see the results of the 10th hypothesis test after the previous 9 have failed;
- Communication. You don't have a problem with telling stories starting from the end.
Join our team of dreamers, doers, and global changemakers!
Who we are:
We're a global marketing service provider and we specialize in affiliate marketing & publishing. We are digital natives, data obsessed and focused on measurable outcomes. Some of the most talented individuals you'll ever meet, all with one thing in common: doing great work, and growing as a team.
Our values lie in Diversity, Equity, Inclusion and Belonging. We strongly believe in equality and stand against all kinds of discrimination. We dare to be unapologetically ourselves. Come join a team of explorers who are motivated by growth, and driven by results.
What you'll do
The Technical Analyst will report to the Business Intelligence Manager and work closely with marketing, performance optimization, product, business, and development teams to implement tracking for data collection and enhance business intelligence solutions using data.
- Translate business needs to technical specifications for data tracking requirements to help provide actionable insights on affiliate performance.
- Contribute to the continuous improvement/refinement of processes, tools, quality metrics, methodologies and standards with various teams and team members.
- Build and deploy tracking for data collection on client websites based on business goals (via Google Tag Manager, or another tag management tool).
- Evaluate and improve existing BI systems.
What you'll bring
- Degree or equivalent in computer science, web development or related field, or proven equivalent experience.
- Intermediate knowledge of JavaScript, jQuery, CSS, and HTML.
- Knowledge of website architecture, including DOM Events, DOM Manipulation, GTM Data Layer.
- Knowledge of analytics implementation (such as Google Tag Manager).
- Excellent problem-solving skills and attention to details, in addition to the ability to identify and address problems effectively.
- Strong desire and passion to learn and grow various data-related skills.
What's in it for you
- Relocation & soft landing for you and your family (applicable in Spain, if you are moving from a different city/country)
- Attractive salary
- Competitive private health & life insurance package
- Flexible working hours and remote-friendly tools and methodologies to stay connected
- Workplace perks such as coffee, fresh fruit, bread, sweets and drinks provided daily
- In addition to regular leave yearly, six weeks' paid leave for every four years of service
- Be part of a multicultural environment
- Wellbeing programme
- Different employee events throughout the year and team building activities
- Career Development training and programs to help you grow!
Remote Work From Home Data Entry – (22002874)
Description
- The Data Entry Operator enters data from images into the data capture system.
- Inputs appropriate data in the prescribed format, utilizing basic knowledge of computer software or systems.
- Cross-references data to ensure accuracy and completeness; scans and edits for errors during entry.
Qualifications
- Posting payments
- Preparation of deposits
- NSF processing
- Credit card processing
- Daily activity balancing
- Maintenance of incoming and outgoing mail
- Entering new business as time permits
Primary Location: United States-Remote-Remote
Job: Associate
Organization: HPHS – Onshore Operations
eVisit is a healthcare tech company that enables healthcare organizations to stand up and deliver virtual care in customized clinical workflows. We’re at a very exciting moment: we’ve grown our customer base, we raised $45M last year, and we’re accelerating the maturity of our data and analytics products and services. We’re secure enough to know what we want to do, but young enough that you’ll be able to have a massive impact on the direction, performance, and results of the Data and Analytics team.
We are creating telemedicine technology that’s on the forefront of healthcare innovation. Part of our challenge is using data to prove the impact, and potential opportunities of telemedicine to ourselves, our customers, and the market. This has been a rewarding and challenging process, and we’re continuing to expand. We are looking for an exceptional data analyst who is eager to help us on our mission to simplify healthcare delivery to everyone, everywhere.
About the Team and Job
eVisit’s data and reporting arm of the Product team is responsible for answering key business questions for both clients and internal stakeholders. When telehealth clinical workflows are implemented successfully based on insight we provide, we are able to improve patient experience, clinical quality, provider experience, and the cost of healthcare itself.
Our data and analytics team is a tight-knit, high-performing team that owns the data architecture, data products, and analytics of eVisit. We dive deep into customer utilization trends, and draw insights to help drive operational excellence. We’re also developing our enterprise data architecture using AWS technologies.
As a Senior Data Engineer, you would be central to the Data and Analytics team’s success. You would face a variety of challenges ranging from sending automated custom sftp reports to clients, to working hand in hand with a data analyst and architect to bring a new data product to life.
What You Will Do
- Work cross functionally with product and engineering teams to build meaningful data assets based on defined methodologies to capture patient care journeys and episodes of care
- Design data models around a variety of different types of data including: healthcare provider data, patient authorizations and claims, medical images and reports, proprietary quality assessments, and more.
- Build reliable, performant, maintainable, secure systems and pipelines which can scale to the needs of our business.
- Transform complex (and sometimes messy) data from disparate sources into clean, coherent data sets for consumers.
- Expose data interfaces to customers, providers, and internal teams.
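As a rough sketch of the messy-sources-into-coherent-data-sets work above, the snippet below normalizes two hypothetical visit-record shapes (field names invented for illustration) onto one schema using only the standard library:

```python
from datetime import datetime

# Hypothetical raw visit records from two sources with inconsistent shapes.
source_a = [{"patient_id": "P1", "visit_date": "2023-04-01", "duration_min": "25"}]
source_b = [{"pid": "P2", "date": "04/02/2023", "minutes": 40}]


def normalize(rec: dict) -> dict:
    """Map either source's fields onto one coherent schema."""
    if "patient_id" in rec:
        date = datetime.strptime(rec["visit_date"], "%Y-%m-%d").date()
        return {
            "patient_id": rec["patient_id"],
            "date": date,
            "duration_min": int(rec["duration_min"]),
        }
    date = datetime.strptime(rec["date"], "%m/%d/%Y").date()
    return {"patient_id": rec["pid"], "date": date, "duration_min": int(rec["minutes"])}


clean = sorted((normalize(r) for r in source_a + source_b), key=lambda r: r["date"])
print([r["patient_id"] for r in clean])  # → ['P1', 'P2']
```

A production pipeline would push this mapping into declarative ETL configuration, but the shape of the problem is the same.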
Requirements
Your Background
We don’t expect any candidate to have all of the following qualifications and experiences, but a successful candidate will have many of these:
- 2-4 years of professional data experience at a healthcare technology company
- 2+ years of professional experience with building data platforms and/or ETL pipelines
- Excellent SQL and Python chops, including common data packages such as pandas
- Deep familiarity with the landscape of data technologies (tools for ingestion, processing, storage, etc.)
- Demonstrated ability to turn fuzzy data into meaningful structures to be used for analytic purposes.
- Exceptional written and verbal communication skills
- Knowledge of standard medical terminology and clinical data
- Experience with Airflow
- Experience with AWS
Benefits
Location - Where’s the job?
We are 100% remote, and staying that way. We are able to hire colleagues in all 50 states and internationally. Our team was distributed internationally even before the pandemic. It’s baked into how we work. You can work from wherever you want, although this position requires a lot of collaboration so you should plan on being in or near the timezone of your team.
We can’t wait to learn more about you and meet you at eVisit!
At Fors Marsh Group (FMG), we combine the power of science and strategy to improve people's lives. Each day, we work with institutions and organizations that seek to disrupt markets, understand and influence behavior, drive action on a national scale, and create positive impact. Our approach extends far beyond our client portfolio—as a certified B Corporation and a 2020 Greenbook Top 50 Market Research Company, we make a difference in our community through corporate-sponsored employee volunteer programs and pro bono partnerships with values-aligned nonprofits. Most importantly, as a 2019, 2020 and 2021 Top Workplace, we are committed to putting people first and we foster a culture that reflects that commitment. We are proud to be an equal opportunity employer, and we celebrate diversity and inclusivity as the foundation of a healthy, successful, and innovative work environment. Join us, and together we can work to ensure a better tomorrow.
We are currently seeking an experienced researcher for a senior position on our Military Analytics team. Our Military Analytics team bridges the gap between traditional social science and data science, leveraging innovative analytic tools for research committed to improving the health and well-being of Service members and DoD personnel. We routinely synthesize diverse sources of information, including administrative, survey, and text data, distilling complex information for policy-makers and analysts.
This individual's primary responsibilities will be to provide subject matter and methodological expertise in areas such as machine learning, big data analysis, and text analysis; develop research designs; and oversee the work of multiple research teams conducting quantitative research. This individual will need to be equal parts data analyst, social scientist, and project manager. This job is best for someone who enjoys solving challenging analytic problems with a large methodological toolkit, has experience extracting insights from large data sets, and thrives in a collaborative environment.
Responsibilities include:
- Applying sophisticated principles in the fields of data science and/or programming to social science research projects
- Serving as a technical lead on research projects with a data science focus
- Working with large, complex quantitative data sets to aggregate, organize, and explore data assets through a variety of techniques.
- Analyzing data and interpreting results from descriptive and inferential analyses to identify patterns and solutions, drawing on training or applied experience with multivariate modeling, dimension reduction, and predictive analytics.
- Developing innovative, transparent, and reproducible systems to facilitate social and behavioral sciences (SBS) best practices
- Preparing technical reports, presentations, and executive summaries for analyst and non-analyst audiences, written proposals, and other internal or external communications summarizing research methods, findings, and implications.
- Managing multiple concurrent projects, including providing quality control and implementing financial controls for projects.
- Directly interfacing with team members and clients to understand their needs, manage their expectations, respond to ad hoc requests, and communicate the most pertinent results to them in a way that is useful and easy to understand.
- Managing and supporting a mixed team of analysts and researchers, including setting goals, cultivating a productive, growth-oriented work environment, and team development
Qualifications:
- Master's degree (PhD preferred) in a social science field and/or data science
- A minimum of four years of post-graduate applied experience leading research projects, preferably through the use of mixed-methods approaches
- Ability to work effectively as a team leader, team member, or independently
- Demonstrated experience writing syntax in R (additional skill in other languages such as Stata, SAS, SQL and Python desired but not required)
- Demonstrated experience working with diverse data sources, including survey, personnel, administrative, and text data.
- Demonstrated experience with machine learning algorithms (e.g., decision trees, k-means, random forests, SVM) and their practical uses and limitations
- Demonstrated experience with natural language processing techniques and interpretation of results
- Experience working with existing datasets and integrating from multiple sources (SQL experience a plus)
- Excellent verbal and written communications skills
- Willingness and ability to learn new research topic areas, methods, and analytic approaches.
- Demonstrated ability to lead high-performing research teams
- Applicants must be comfortable working with sensitive topics such as sexual assault and suicidal ideation
- Applicants may be subject to a low-level government security investigation and must meet eligibility criteria for access to sensitive information
- US Citizenship required
- FMG requires all new hires to be fully vaccinated against COVID-19 in order to start employment, unless that person has requested and FMG has granted a medical or religious exemption during the onboarding process.
We Offer:
Our benefits typically meet or exceed our competitors' packages. Some of the ways we are unique:
- Top-tier health, dental, vision, and long and short-term disability coverage all covered at 100% for employee coverage
- Remote work
- Our company culture, which values balance. We work around our personal realities while always accomplishing what's expected of us
- We provide a Personalized PTO Program designed to make sure that employees take the leave they need when they need it.
- Generous matching retirement contributions, with no vesting period starting in your third month of employment
- Dedicated training and development budgets to expand your expertise and grow your skillset
- You can volunteer your way with paid time off
- You can participate in FMG staff-led affinity groups
- Our employees receive product and service discounts through the certified B Corp network
- Position type: full-time, indefinite contract
- Seniority: Senior Software Engineer (inidual contributor)
- Location: Remote (desired time zone between UTC-3 and UTC+3)
- Compensation: 60.000 - 65.000 EUR/year + stock options (both based on seniority level) + benefits
- Benefits: fully remote work & flexible hours; 37 days/year of vacation & holidays paid time off; sick days; health insurance allowance; company-provided equipment, remote work allowance & equipment allowance; company-sponsored in-person events; great diverse & inclusive people-first culture.
About the role
As a Senior Software Engineer of Data Retrieval at Athenian you can expect to have a big impact in shaping the product.
You will have the opportunity to work alongside our highly skilled team to design, build, and iterate on a world-class software web application.
You are expected to contribute to the Data Retrieval part of the backend. Data Retrieval involves fetching, updating, and archiving all the data related to different data sources in real time (GitHub, JIRA, CI/CD services, etc.). It is structured as a graph with nodes performing different tasks, implemented either as workers on Kubernetes or as Google Cloud Functions, exchanging messages through edges implemented with Google Pub/Sub.
We are developers building a product for other developers and we build our product with a sense of pride and ownership. You will be in a collaborative environment where you will work closely together with product and engineering to understand user needs, and discuss new ideas to solve complex problems.
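A toy, in-process sketch of the node-and-edge structure described above, with Python queues standing in for Pub/Sub topics and plain functions standing in for Kubernetes workers or Cloud Functions (all names are illustrative, not Athenian's actual code):

```python
import queue

# Toy edges: in production these would be Google Pub/Sub topics.
fetch_edge: "queue.Queue[dict]" = queue.Queue()
archive_edge: "queue.Queue[dict]" = queue.Queue()


def fetch_worker(msg: dict) -> None:
    """Node that fetches data for a source and emits an enriched message downstream."""
    archive_edge.put({**msg, "fetched": True})


def archive_worker(msg: dict) -> dict:
    """Terminal node that would persist the message; here it just marks it archived."""
    return {**msg, "archived": True}


# Drive one message through the fetch -> archive path of the graph.
fetch_edge.put({"source": "github", "repo": "org/repo"})
fetch_worker(fetch_edge.get())
result = archive_worker(archive_edge.get())
print(result)
```

The real system adds what queues alone do not give you: acknowledgement, retry, and fan-out semantics across independently scaled workers.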
Responsibilities

- Be part of the Data Retrieval team, tackling current and new challenges to be ready to handle the expected growth
- Closely collaborate with the API and the DevOps teams as part of the Backend
- Understand customers’ needs and propose ideas and discuss solutions, innovating with the team on engineering and product.
- Full professional proficiency in English, written and spoken. The ability to communicate comes first, no matter the level of technical skills.
- Strong experience with any of the following: Go, Java, C#, C++, Rust, Ruby, TypeScript (Node), Python (with typing).
- Willing to work in Go.
- Strong experience with PostgreSQL.
- Strong experience with Linux.
- Strong knowledge of Git tools and concepts.
- Experience with different APIs.
- Experience with event-driven backend architectures.
- Experience with Continuous Integration and Continuous Delivery.
- Experience with scalable backend design: distributed processing, load balancing, fault tolerance, etc.
- Knowledge of Docker, Kubernetes.
- Familiarity with Google Cloud Platform or similar.
- Strong experience with Go.
- Strong experience with Google Cloud Platform (Cloud Functions, Cloud Run, Pub/Sub)
- Knowledge of C/C++ or Rust.
- Knowledge of Python.
- Experience with columnar DBs like ClickHouse, Druid.
- Experience with distributed SQL databases.
- Experience with Terraform.
- Experience with monitoring and alerting.
- Experience with GitHub Actions, CircleCI, and Jenkins.
- Having worked remotely.
- Having worked in a dynamic start-up environment.
- Having worked on a SaaS product.
- Having used modern collaboration tooling (Jira, GitHub, Slack, Zoom, etc.).
- Responsible and professional
- Independent, goal-oriented, proactive attitude
- Disciplined and communicative in remote environments
- Collaborative and with a strong team spirit
- Curious and interested in learning new things
Hiring process
The hiring process consists of multiple steps:
- CV review
- Screening call
- Technical assessment project
- Technical interview + Q&A
- Architecture interview + manager interview
- Communication of the outcome
At Athenian Engineering we are currently a team of 8, including 1 team lead, the Head of Analytics, and the Head of Engineering. Everyone is a world-class Senior Engineer, each with a diverse area of expertise ranging from Language Analysis and System Architecture to Machine Learning on Code, modern APIs, and modern Web Applications.
We collaborate with each other on a daily basis and we value each contribution and idea. We foster good collaboration through transparency and good communication, and we believe that teamwork is key to moving fast and being successful.
- We are inclusive and welcome diversity; we encourage applicants from all backgrounds to apply.
- Athenian is a fully remote company. At the moment, we are 20+ people from several different countries working closely together in a fully-distributed way.
- We put a lot of value into collaboration and feedback, no matter if it comes from our CEO, a customer, Product, or Engineering, because we know that the best ideas can come from anywhere.
- We believe in transparency and collaboration, which is reflected in how we operate internally and externally.
- We are humane and care about each other's growth and wellbeing.
- Flexible hours: set your own schedule that fits you.
Headquartered in Los Angeles, Criteria Corp is a technology company dedicated to changing the way companies find and hire great talent. That's why we develop fair, objective and innovative assessment products to inform effective people decisions. Over 4,100 companies currently use our Criteria products and we are growing quickly. We apply that same dedication when it comes to hiring our own team. We owe our growth and success to a passionate team of individuals working together to achieve a common goal.
POSITION SUMMARY
The Senior Data Engineer is responsible for expanding and optimizing the functionality of our Data Services Platform by contributing to its architecture and development, enabling Data Analysts and Data Scientists to operate more effectively. The Senior Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects.
REQUIRED KNOWLEDGE/SKILLS/ABILITIES
To be successful in this role the incumbent will demonstrate the following:
- 5+ years of experience in a Data Engineer role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field
- Advanced working knowledge of SQL and experience with relational databases and query authoring, as well as working familiarity with a variety of databases.
- Passion for data, automation, analytics and understanding of new technologies and approaches to resolving problems at scale
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Deep understanding of SaaS applications.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Strong project management and organizational skills.
- Working knowledge of modern object-oriented and functional programming languages.
RESPONSIBILITIES
The primary responsibilities of this role include:
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Architect and develop scalable and efficient data analytics patterns that can be reused across common use-cases.
- Bring new ideas from concept to implementation, write quality, testable code, and participate in design/development discussions.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data' technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders including the Executive, Product, and SaaS Platform teams to assist with data-related technical issues and support their data infrastructure needs.
- Create tooling to enable Data Analysts and Data Scientists team members to effectively consume and utilize the platform you will be managing.
- Think laterally to reduce operational costs and overheads of the platform.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions and working with internal auditing and compliance teams to ensure appropriate guard rails are in place.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Engage with internal and external stakeholders to gather requirements and opportunities to improve the Data Services Platform
- Develop processes and guidelines that enable the delegation of control of aspects of the platform to technically proficient users.
- Other leadership functions as required.
Team Leadership/Management
- Actively lead and coach direct reports to ensure they are fully aware of and supported to execute work requirements.
- Provide appropriate technical training and continuing professional development for all team members.
At Turquoise Health, we're making healthcare pricing simpler, more transparent, and lower cost for everyone. Have you or a family member ever gotten an MRI, a lab, or even a straightforward surgery without knowing the cost in advance? That's bonkers, right? We're working to fix that.
We’ve already launched our consumer-facing website that allows anyone to search and compare hospital insurance rates; something once impossible. Now, we're rolling out a suite of new products for providers, payers, employers, and patients to clean up the healthcare transaction. We want to change how the industry works from the inside out.
We're a Series A startup backed by top VCs a16z, Box Group, Bessemer Venture Partners, and Tiger Global. Most importantly, we're a multi-talented group of folks (moonlighting as authors, bass players, improv instructors, chefs, linguists, and trivia buffs) with a passion for improving healthcare. We're eager to find ambitious and well-rounded teammates to join us on this mission.
The Role
As a Data Strategy Lead you will become a trusted advisor to payers and providers on all things healthcare pricing data. As an expert in our Hospital Rates Database and Payer Rates Data Warehouse, you will help customers derive value to power negotiations and patient pricing strategies. You’ll work to create happy data customers, expand product adoption, and inform our data product roadmap.
You should be innovative, analytical, and resourceful, with the ability to tackle unique problems quickly. You will support projects across multiple clients, so you must be someone who can manage your time efficiently across multiple projects. This role is a mixture of technical data work in SQL (bonus for proficiency in Python or R), healthcare subject matter expertise, and building strong customer relationships. As a result, the role will focus on driving active usage as a post-sales measurement of success.
Responsibilities
- Partner with customers to understand their data and reporting requirements and translate them to the engineering team
- Create and participate in customer education through various channels (email, blog, webinar)
- Provide comprehensive day-to-day analytics support to customers, develop tools and resources to empower data access and self-service so your expertise can be scaled
- Perform ad hoc analysis, insight requests, and data extractions to resolve critical business and infrastructure issues for our customers
- Assist customers with migrating our data to their platforms and tools (often, into a local environment that differs quite a bit from ours)
- Work closely with the product team to ensure customer feedback drives product development
- Work closely with technical partners on our data engineering team on designing and developing robust data structures and highly reliable data pipelines
- Actively solicit and prioritize technical feedback via customer engagement
Requirements
- Excellent written communication and interpersonal skills - an equal desire to work with customers and see them succeed vs. to spend self-directed time with data
- 3+ years of analytics experience in SQL + a popular data visualization stack (Tableau, PowerBI, Looker, or others)
- 3+ years experience working in healthcare data (revenue cycle/claims, patient access, quality data, or interoperability, as examples)
- Strong analytical, problem-solving, mathematical, and creative thinking skills
- Strong technical intuition and ability to understand complex business systems
- Expert knowledge in data modeling concepts and implementation
- Fluency in Excel (no matter what, it always comes in handy)
- Hands-on experience in processing extremely large data sets
- Expertise in visualization technologies including Looker, Tableau, and others
- Desire to work in a fast-paced, data-intensive environment
Benefits
- Stellar Health Care Plan options (Medical, Dental & Vision)
- Unlimited Paid Time Off
- 401K + Matching
- Family Leave (Maternity, Paternity)
- Work From Home + Paid Remote Co-working Weeks
- Stock Option Plan
- Monthly Wellness Benefit
- Annual Learning & Development Benefit
- Company-provided equipment (Laptop, Mouse, Keyboard, Monitor)
Job Location
Turquoise Health is a fully remote company based in the US. Our founding team is located in California (San Diego) and London. We work with team members and contractors in the US and around the world, but we operate on US business hours and work with clients entirely based in the US.
For this role, we are seeking US-based candidates.
As a Data Entry Operator II, you will directly affect Veterans’ ability to access their hard-earned benefits by completing necessary tasks that digitize critical documents required for pension, disability, and other benefits approval and disbursement. There’s no job too small when it affects our nation’s veterans.
Starting wage of up to $20.32 per hour (hourly wage, including fringe benefits).
Scheduled Hours:
Monday-Friday: 12pm-8:30pm or 9am-5:30pm; we also have flexible schedules
Saturdays: Scheduled based on production need.
Training hours:
Onsite Monday-Friday: 7:30am-11:30am until government clearance and access is obtained to work remotely.
First day/training would begin on Friday, August 26th.
Purpose: The Data Entry Operator II operates a computer to transcribe data into a format suitable for computer processing. This position requires the application of experience and judgment in selecting procedures to be followed, and in searching for, interpreting, selecting, or coding items to be entered from a variety of document sources.
*This is a work-from-home position. You can work from anywhere.
Essential Duties and Responsibilities:
- Logs into a computer and accesses work queues to review computer images of documents and code documents using an established list of codes.
- Verify, if required, whether previously extracted information is correct and make corrections to previously extracted information as needed.
- Adjust orientation or lighting of documents.
- Reference work instructions as needed.
- Required to attend mandatory meetings and trainings, work scheduled overtime with minimal notice, and perform other duties as assigned per business needs.
Qualification Requirements –
To perform the job successfully, an individual should demonstrate the following:
- Must be at least 18 years of age.
- Able to read, write and speak English.
- Able to maintain confidential information.
- Successfully pass and maintain acceptable background checks and security clearances.
- Basic computer knowledge.
- Able to type 8,000 keystrokes per hour.
Equal Opportunity Employer/Protected Veterans/Individuals with Disabilities
Riverflex is looking for a pragmatic & hands-on Senior Data Transformation Lead, based in the Netherlands, who will be able to develop and drive the transformation of the client's Data Differentiation Platform team.
Our client is revising their data strategy and operating model and is transitioning into a Data as a Product architecture and operating model.
What you will be doing as the Tech Transformation Consultant:
- Share and discuss the overall operating model with a team of the tech organization (300-400 employees) to ensure they are aware of and understand the new operating model
- Develop a tailored transformation plan to apply the operating model into the Data Differentiation Platform team
- Develop and implement new ways of working for the operating model
- Identify other transformation gaps and make recommendations
- Drive changes as a “transformation lead” project managing, tracking, following-up, managing risks and issues and ensuring delivery
- Implement data (quality) management and governance and translate theoretical policies into pragmatic enforceable iterative steps
- Collect, aggregate & align existing data governance policies and requirements across the organization
- Decommission existing data platforms currently in place and transition into the new operating model & architecture
What you will need to succeed:
- 7+ years of experience in leading, developing, and structuring large-scale data transformations as a transformation program manager (or a similar role)
- Consulting and/or Change management background in data strategy, preferably in a top-tier environment
- Experience in agile delivery / agile transformation and understanding of modern tech ways of working (DevOps / Cloud / Agile / outcome-based planning)
- Well-versed in best practices of data governance & data management (e.g. DMBOK expertise) and experience with a pragmatic approach to implementing data governance/data management processes
- Experience in data migration from a legacy platform
- Ability to overcome resistance to change, a driving mentality, and a highly persuasive nature
- Entrepreneurial mindset and looking to work in a small team and make processes and activities scalable
- Ability to perform in a fast-paced environment
- Independence and the ability to be self-motivated
- Already residing in the Netherlands
About Riverflex Consulting
Riverflex is a global collective of consultants united by a mission to drive the sharpest edge of business.
We harness the true power of open talent by assembling high-performing teams of top-tier independent professionals to drive digital growth and innovation for our clients.
So far, we’ve supported over 20 world-leading brands such as Nestle, Ahold Delhaize, IKEA, Samsung, Fenix Outdoor, and PVH with digital consulting, technology, data, and talent services.
The Riverflex team is a group of multi-disciplinary and diverse professionals that come from every corner of the world: Portugal, Hong Kong, Turkey, South Africa, Pakistan, Germany... You name it! We have office locations in Amsterdam, Barcelona, London, and Istanbul and are always looking for opportunities to expand our vision.
Plato is on a mission to empower engineering and product teams to unlock their full potential. Using our proprietary Talent Growth Platform, we connect engineering and product professionals with industry experts for personalized talent coaching, powerful 1-1 mentorship sessions with our incredible mentor community, and networking opportunities that drive personal and professional growth.
We're backed by a group of impressive advisors and investors including SaaStr, Y Combinator, the Slack Fund, S28 Capital, Eric Yuan (Zoom), Mathilde Collin (Front), Andrew Miklas (PagerDuty), and many more!
About the Lead Data Engineer Role
In this role, you'll have the opportunity to join us as an early member of the Data Engineering team. You'll be setting direction, choosing tooling, and designing the framework Plato will build on. This is a hands-on role to start, but you will be building a team that will be responsible for driving data priorities forward and making a large-scale impact across Plato.
What You’ll Do:
- Work independently to drive forward Plato's data priorities as the founding member of the company's data team (within the engineering team)
- Advise on and select tooling for our ETL and reverse ETL platforms. Design, develop, and maintain ETL platforms for various business use cases which are fault-tolerant, highly distributed, and robust.
- Define and execute best practices for no-code data exploration within Plato.
- Work on structured and semi-structured data to put company data to business use.
- Analyze large sets of structured and semi-structured data for business analytics and design tooling for non-engineers.
- Work with technical and non-technical team members to advise on data.
- Integrate new sources of data required by Plato while ensuring best practices are met.
- Fully own and manage the data lake and all of the data and structure within it
What We’re Looking For
- 7+ years of experience handling data pipelines, data warehouses, and data lakes.
- You understand large data sets and their application to business.
- You're comfortable being the data expert at a company and explaining the business significance throughout the schema.
- You can ingest large amounts of SQL and NoSQL data and ensure that the data is structured and formatted in the data warehouse in a way that is consistent and easy to use and understand.
- You have experience as a team lead, setting strategy and mentoring others.
- You take large and complex projects and create clear prioritization and estimations around each task.
- You have experience with Airbyte, Segment, or similar tools
- You enjoy working cross-functionally to answer questions and advise
- You're excited about building a scrappy, fast-moving tech startup!
Why Choose Plato
At Plato, you’ll be given the opportunity to contribute to something truly meaningful that positively impacts thousands of people around the world. Aligning with our mission, we believe that by investing in our team members’ personal and professional development, we can unlock Plato’s full potential and build a thriving work environment for the greater tech community. Even a 10% improvement here will lead to a ripple effect that will benefit our community.
We spend nearly all of our waking lives at work; let’s make it better! Here are some of the ways we do that at Plato:
- Work from anywhere: On our fully-remote team, you can work from anywhere in the world as long as you have 3-4 hours of PT overlap for non-customer-facing roles and 5 hours of PT overlap for customer-facing roles.
- Unlimited responsible time off: Take time away to do what you love and recharge with unlimited responsible time off.
- Competitive compensation and opportunity for advancement: Grow within your role or try something new with opportunities for advancement within Plato.
- Comprehensive benefits package: Medical, dental, and vision coverage to keep you happy and healthy.
- In-person team building activities: We bring our team members together for regular in-person events in awesome locations like the Metaverse, France, Mexico, New Orleans, California, and Spain, to name a few!
- Work with amazing companies: Hundreds of top technology companies have chosen Plato to strengthen their teams, including DocuSign, Box, Segment, Rakuten, SurveyMonkey, and Betterment.
- A diverse team from around the world: Work with and learn from a group of diverse team members from around the world, including the United States, Canada, France, Spain, South Africa, Germany, Poland, Switzerland, India, Brazil, and more.
Plato is an equal opportunity employer that is committed to inclusion and diversity in the workplace. All qualified applicants will be considered for employment without regard to race, color, religion, sex, sexual orientation, age, nationality, disability, protected veteran status, gender identity, or any other factor protected by applicable federal, state, or local laws.
Learn more about your equal employment opportunity (EEO) rights as an applicant here.
Plato is committed to working with and providing reasonable accommodations to individuals with disabilities. If you need reasonable accommodation because of a disability for any part of the employment process, please provide us with additional information on the nature of your request.
Would you like to join a leading Digital Solution Company and contribute to innovative solutions that are built on a daily basis?
Our client is a leading Digital Solution Company for Business IT Solutions and digital transformation. Founded in 1996, our client's company started off as an IT consulting partner for one of the most recognized brands in the agricultural equipment manufacturing industry. They provide a comprehensive portfolio of services and solutions that not only solve today’s IT challenges but also address tomorrow’s business priorities.
The Microsoft Service Line is looking for a Data Modeler who thrives on challenges and has the desire to make a real difference in the world of Business. This is an exciting opportunity for self-starters who are passionate about getting things done, think strategically and out-of-the-box, and are committed to driving excellence.
Start your day with flexible morning hours and:
- Develop cloud solution design for Enterprise customers
- Utilize knowledge of best practices to build highly scalable, robust, secure, and sustainable solutions using the Azure Platform.
- Transform business requirements into modern data models and implement them in SQL DB’s
- Estimate technical requirements and break down work into user stories and tasks
- Lead project teams in delivering intelligent cloud applications
- Mentor team members in both technical and process-related areas
- Design and work on proof of concepts that can be demonstrated to the customer
- Become the customer's trusted technical advisor and subject-matter expert
- Lead client engagements and present solutions to both technical and business stakeholders and facilitate strategic discussions.
- Stay updated on Azure services, and contribute towards capability, competency building on Azure
- Understand Enterprise Application design framework and processes
- Establish cloud best practices and review code.
We are happy to hear from you if you have:
- 6+ years of experience as a data architect designing databases that meet organizational needs using conceptual, logical, and physical data models.
- Experience in design and implementation of RDBMS, operational data store (ODS), data marts, and data lakes on target platforms (SQL/NoSQL).
- Good knowledge of metadata management, data modeling, and related tools (Visio, Erwin, ER Studio, or others) required.
- Experience in designing cloud solutions for scalability, high availability in Azure/AWS preferred.
- ETL, data integration and data migration design experience.
- Scripting experience in any one of the following: SQL, Python, PySpark, or Scala
You will love to join this company for:
- B2B contract
- All the equipment needed for remote work will be provided by the client
- Competitive package in line with the best market standards
- Fully remote work
- Work-life balance
- Agile work environment
"The YMCA of San Diego County is the leading nonprofit committed to strengthening individuals and communities across the country. Every day, we help people connect to their purpose, potential and each other. Working locally, we focus on supporting young people, improving health and well-being and inspiring action in and across communities." The YMCA of San Diego County is proud to be an Equal Opportunity Employer/Affirmative Action Employer Minority/Female/Disability/Vets. We are committed to a diverse workforce.
Job Description
- Maintains database by entering new and updated customer and account information.
- Prepares source data for computer entry by compiling and sorting information.
- Establishes entry priorities.
- Processes customer and account source documents by reviewing data for deficiencies.
- Resolves deficiencies by using standard procedures or returning incomplete documents to the team leader for resolution.
- Enters customer and account data by inputting alphabetic and numeric information on keyboard or optical scanner according to screen format.
- Maintains data entry requirements by following data program techniques and procedures.
- Verifies entered customer and account data by reviewing, correcting, deleting, or reentering data.
- Combines data from both systems when account information is incomplete.
- Purges files to eliminate duplication of data.
- Tests customer and account system changes and upgrades by inputting new data.
- Secures information by completing database backups.
- Maintains operations by following policies and procedures and reporting needed changes.
- Maintains customer confidence and protects operations by keeping information confidential.
- Contributes to team effort by accomplishing related results as needed.
- Organization skills
- Quick typing skills
- Attention to detail
- Computer savvy
- Confidentiality
- Thoroughness
- Coordinate facilities and furnishings maintenance including assembling furniture
- Effectively and promptly communicate building and maintenance related issues to supervisor and Department Head
- Assist in the maintenance of clean, attractive, safe and well-repaired facilities
- Understand the approved use of various chemicals and cleaning agents and be able to use and follow all MSDS materials/precautions and directions associated with them
- Conducts unit inspections as required by the program policies and guidelines
- Assist in all necessary repairs and upkeep of program properties, including painting, fire safety, changing door knobs, key copies etc
- Monitor YMCA vehicle maintenance schedule and other vehicle documentation, performs vehicle inspections, reports issues to supervisor and Department Head, and coordinate vehicle repairs with approved vendors
- Oversee safety program and inform management of any unsafe conditions
- Follow all fire, health, licensing, and YMCA related standards and property lease requirements
- Must be accessible to facility at all times and carry facility cell phone on and off duty
- Assist and coordinate intakes and departures
- Oversee delivery of consistent services; work with clients, public/private organizations and program staff
- Assess need for and assist in the coordination of all facility renovations and upgrades
- Perform preventative maintenance on equipment throughout the facility
- Repair and/or replace equipment as needed
- Teach Independent Living Skills and provide support to participants in relational wellness
- Provide crises intervention and mediation to participants
- Communicate effectively with Department Head, co-workers, participants, families, and other service providers
- Provide a secondary on-call option during specified times
- Ensure participants maintain safe and adequate living environment
- Work independently and as a team member to create innovative ways to meet contract goals and provide a high level of service that is engaging and meets the needs of the participants
- Transport participants in YMCA vehicle as needed
- Practice non-aggressive defensive driving techniques
- Operate YMCA vehicles in safe manner
- Vehicle use is for YMCA business only
- Perform other administrative tasks as assigned and assist with maintaining proper document storage and auditing
- Participate in and attend all required staff meetings, trainings, YMCA staff development events and appropriate agency-wide committees
- Ability to work effectively with others in alignment with the YMCA 4 Core Values
- Models the 4 Core Values in all aspects of position responsibilities
The YMCA of San Diego County is proud to be an Equal Opportunity Employer/Affirmative Action Employer Minority/Female/Disability/Vets. We are committed to a diverse workforce.
Would you like to join a leading Digital Solution Company and contribute to innovative solutions that are built on a daily basis?
Our client is a leading Digital Solution Company for Business IT Solutions and digital transformation. Founded in 1996, our client's company started off as an IT consulting partner for one of the most recognized brands in the agricultural equipment manufacturing industry. They provide a comprehensive portfolio of services and solutions that not only solve today’s IT challenges but also address tomorrow’s business priorities.
Start your day with flexible morning hours and:
- Find solutions to business challenges by evaluating emerging technologies and the evolution of current technologies
- Participate in deep architectural and requirement analysis discussions to build client confidence and ensure success in building new and migrating existing applications, software and services onto the Data Platform
- Balance desire for clean architecture design with pragmatism on delivery, including consideration of minimization of technical debt
- Define a future vision roadmap for optimal applications of technology to meet business needs
- Evolve, direct and govern the development of frameworks, standards, policies, principles and procedures that guide technology decisions and maximize reuse of technology
- Chair solution forums, ensuring adherence to architectural standards, and consistency across organizational and design boundaries
- Develop the component, integration and technical architecture for project and platform initiatives, and collaborates with project teams to realize them
- Establish credibility & build deep relationships with senior technical staff to generate common understanding and relate their challenges to platform development strategy
- Contribute to strategic plans for technology, which satisfy the current and ongoing needs of the firm's business strategy, and the current and future capabilities of technology
We are happy to hear from you if you have:
- Deep knowledge, hands-on experience and conceptual/architectural understanding of many flavors of cloud technologies
- Experience in designing for microservices and event-driven architectures
- Ability to bridge the gap between business needs/direction and the possibilities and constraints surrounding technical implementation
- Ability to work as part of a number of global project teams providing architectural direction
- Ability to assist business partners to define and prioritize their requirements
- Ability to influence, manage and facilitate change, within internal teams and externally in matrix environment
- Ability to think conceptually and pragmatically to determine feasible solutions for implementation
- Strong interpersonal skills to resolve problems professionally, lead groups, negotiate and create consensus
- Practical experience of developing secure solutions based on major cloud vendors, such as Azure
- Ability to lead or conduct technology evaluations and POCs against specific objectives
- Proven experience working as a lead developer and architect
You will love to join this company for:
- B2B contract
- All the equipment needed for remote work will be provided by the client
- Competitive package in line with the best market standards
- Fully remote work
- Work-life balance
- Agile work environment
About Tymit
At Tymit our mission is to reinvent the credit card to make it honest, safe, and more transparent to achieve more peace of mind when managing your finances. To prove this to our users, we have built an elegant and intuitive app linked to a credit card designed to achieve better financial well-being and control like never before.
A credit card from a tech company, not a Bank. Fintech credit makes up only a tiny fraction of overall credit. Still, it is growing rapidly, and Tymit will be at the forefront of the digital disruption in the credit card world.
Based in the UK, Spain, and Greece, our talented and growing team is committed to creating a diverse and fun work environment, putting the user at the center of every decision we make. At Tymit, we have a mantra: customers come first, because that's what drives us and our growth, and we'll continue catering to them that way.
Compensation & Perks
💰 The salary range of this role is £70,000 – £80,000 per annum, depending on experience.
🏝 25 days of paid holiday plus bank holidays.
🥳 Your birthday off.
🏡 Tymit works fully remote, and all employees can remain working from home if they want, traveling to our offices when agreed with the team.
🌇 If you prefer working onsite with other Tymiteers, we have you covered with our offices (post-pandemic).
⏰ Flexible working hours.
🤓 Additional days to attend conferences and workshops related to technology and/or to events in Tymit’s industry and sector.
💲 Budget for remote home expenses.
📚 Learning & Development is a big asset for us, and we have a yearly budget for it.
🏥 Private health insurance.
👩🏽🤝👩🏻 Referral program.
💳 6 months interest-free on the Tymit card for all Tymiteers.
💻 Budget for your home office set up.
About the role
We are looking for a Data Lead to join Tymit to set the strategy for our data function and implement it technically. This is a new role at Tymit, responsible for designing the new data architecture and bringing it to life.
The Data Lead will analyze requirements in order to build the new data architecture following modern data architecture designs, and will code hands-on to bring Tymit a brand-new data warehouse and data lake solution that gives analysts a better source of knowledge from data.
You will partner closely with C-level leadership to define and expand this role's product, business, and technical strategy.
You will help build a data team consisting of data engineers and data analysts as we look to scale through 2022.
We are small, but we think big. 🦄🦄 Do you want to join us?
About the culture
Be ready to join a company where collaboration is key. You will be expected to share knowledge and speak up, while also listening to others and considering their opinions. At Tymit we are also big fans of a good atmosphere and humor, so prepare yourself to receive a bunch of GIFs.
The Analytics team at Tymit works in squads within cross-functional teams, applying good practices and agile methodologies. Within the team you will find Melanie and Fredrik.
During your time at Tymit you will
- Be responsible for the Data decision-making process as well as providing innovation, leadership, and influence in the design of Tymit's Engineering processes.
- Balance technical and product decisions, keeping our customers at the heart of them.
- Be responsible for designing modern data architectures that harmonize with the ecosystem of our current cloud provider.
- Be a “do-er”, and in particular train other Data team members in the technical skills required to develop and maintain the data pipelines.
- Assess the hands-on work done to ensure the data architecture is heading in the right direction.
- Continuously assess the roadmap and vision of the data architecture to include improvements all the way from data transformation to data exploitation.
- Define how we use data to understand customers and improve our product.
- Raise the bar for the entire data function through best practice, tooling, and automation.
- In conjunction with the Platform team, contribute to continuously developing the data architecture in the AWS Cloud.
- Assign responsibilities and manage the workload of the team under your supervision.
- Embrace and promote the Tymit values and those behaviors that we feel need to be recognized and rewarded, or those situations in which we need to provide feedback.
- As part of a regulated entity, you will conduct your responsibilities with the highest standards of transparency and commit to ensuring the right levels of compliance when required. You will also need to align with the principles of sound and prudent conduct of the affairs of an authorized institution.
- Discuss with each team member their mid-and long-term opportunities in Tymit, career path, and expectations, and keep track of areas or positions that might be at risk.
- Be involved in the definition of the different roles under your supervision, participate in all the hiring processes including the offers with candidates, and in the definition of the performance criteria.
- Act as a facilitator between teams and people, mediating communication or requirements problems.
- Provide feedback to, and collect feedback from, the team members who report to you by organizing dedicated 1:1 meetings.
- Coordinate team holidays with Technical Line manager, and Head of Data if required.
Key technologies we use at Tymit
Analytics team: in your day-to-day you will be working with technologies such as AWS, SQL and Metabase.
Tymit tech team: we work with technologies such as Kotlin, SwiftUI, Swift, Java, Groovy, Grails, Quarkus, NodeJS, React, Python, RabbitMQ, Postgres, AWS, ELK, Docker, Docker Swarm, Terraform, Ansible, and Jenkins.
We are always open to adopting new tools to solve complex problems, so we’re ready to discuss options with anyone with the right experience.
What you will bring along
- Experience capturing data requirements from business and technology, and translating those to the vision of driving well-architected data solutions within AWS.
- Good experience using data integration and warehousing technologies like AWS Glue or Redshift.
- Good hands-on experience with Python.
- Expertise in working with data sources of different natures (SQL, NoSQL, key-value, CSVs, etc.) and in designing solutions, and their roadmap, to transform that raw data into knowledge and valuable information.
- Extensive experience working with heterogeneous engineering and product teams, gluing together requirements and solutions.
- Able to handle the ambiguity of a startup to figure out what needs to be done and do it.
- Experience with analytics tools like Looker, Metabase or AWS Quicksight would be great.
What you can expect from our hiring process
Stage 1
45 min. video call with Carla, part of Tymit’s People team. Understand your career plan and what motivates you about Tymit.
Stage 2
60 min. video call with Fran, Platform Lead, and Melanie, Data Analyst. Brief technical discussion around a task to better understand your skills and give you a sneak peek of what working at Tymit could be like.
Stage 3
90 min. video call with Fredrik, Data Engineer, and Héctor, Platform Engineer. Technical discussion around a task to better understand your skills and give you a sneak peek of what working at Tymit could be like.
Stage 4
60 min. video call with two people from our Engineering and Product team. Introduction to your future team to get a sense of Tymit’s culture.
Stage 5 🏁 – Offer 📧
If you have any disability, please let us know whether there are any adjustments we can make for our process to be more inclusive.
To meet our regulatory obligations as a licensed financial services company in the UK, Tymit needs to run background checks, including criminal and credit checks, on our new hires to help us safeguard our users. If you have any concerns regarding this process, please discuss them with our People Team.
Tymit is made up of people from various backgrounds, and you are welcome for who you are, no matter where you come from or what you look like. We seek to create a culture where everyone can belong, because we believe that people do their best work when they can show up every day as their authentic selves. So bring us your personal experience, your perspectives, and your background.
We do not make hiring or employment decisions based on race, religion, age, national origin, gender, gender identity or expression, sexual orientation, marital status, disability, pregnancy status, or any other difference.
*Please take into account that we do not provide visa sponsorship, and you need to have the right to work in the countries where we are located.
Carry1st is the leading publisher of mobile games in Africa. We work with studios across the globe to level up their games and scale in dynamic, frontier markets. In addition, Carry1st has built a proprietary payments engine, Pay1st, and ecommerce platform, the Carry1st Shop, which allows customers to pay for digital content leveraging local payment methods.
As a Data Analyst for the Platform, you will work with Carry1st’s product and growth teams to better understand and serve our customers. You will build analytics and reporting solutions to translate data into actionable insights for both the Carry1st Shop - our B2C ecommerce platform - and Pay1st, our aggregated payments gateway.
Requirements
You will…
- Improve our ability to attract, retain, and serve our customers by making sense of large amounts of behavior, transactional, and user generated data
- Uncover actionable insights to feed into product and growth roadmaps
- Support our digital marketing efforts by measuring campaign performance for our teams
- Empower product leadership to make quantitatively informed, evidence-based decisions on new product features to introduce to our customers
What makes you a great candidate?
- 3-5 years of industry experience in a Business Intelligence role
- Master's degree in Computer Science, Statistics, Mathematics, Engineering or a relevant field
- Advanced proficiency in SQL, Data Modeling and working with Big Data
- Well-versed with data visualization and reporting, e.g. Tableau, PowerBI, or similar
- Experience working with Google Analytics
- Familiar with data modeling and database schema design
- Experience working in a fast-paced, entrepreneurial environment
Benefits
Carry1st is a fast-growing, dynamic place to work. In December 2021, we closed on a large fundraise led by Andreessen Horowitz - with participation from other awesome investors like Google, Riot Games, Konvoy Ventures, Avenir, and Nas! It’s allowed us to scale our team and our ambitions. And while we are laser focused on building the flagship mobile internet company in Africa, our team is fully remote and 100% global - with people in 20+ countries!
At Carry1st, you will…
- Build awesome, industry-changing products, every day
- Grow with a VC-backed startup at the intersection of gaming, fintech, and web3
- Work from anywhere in the world with international teammates
- Own shares in the Company - enabling you to benefit from the value you create
Some additional perks…
- Co-working excursions: Travel to meet your colleagues in cities around the world
- Awesome equipment: Get everything you need to work effectively
- Remote working allowance: Put an additional $600 / year to optimize your WFH experience
- Learning and development: Attend courses, conferences and training events
- Social events: Participate in regular company events to relax and connect with teammates
- Birthday leave: Enjoy a paid day off on your special day
We hire great people from a wide variety of backgrounds, not just because it's the right thing to do, but because it makes our company stronger. Join us!
Learn more about Carry1st…
- Forbes announced that Andreessen Horowitz led Carry1st’s $20M Series A Extension
- The Hollywood Reporter announced that Nas and Google joined Carry1st’s $20M round
- PocketGamer.Biz announced that Carry1st raised $20 million to create the “Garena of Africa”
Welcome! Before we get started, we'd like to recommend that you read the entire job listing and follow the instructions in it carefully. Lots of people don't do this, and it makes a bad first impression because reading instructions and following them precisely are incredibly important things to do if you want to be successful in a remote job like this one. :)
Virtual Data Steward
- 100% Remote—WFA (Work From Anywhere)
- Flexible hours—work whenever you want
- Friendly and supportive globally distributed team—no matter what time it is, someone somewhere in the world is online to say hello and answer questions =)
Index is a virtual business development assistant that integrates human (that's you!) and artificial intelligence directly into the customer relationship management (CRM) software used by law firms. Firms use us to ensure that the data in their CRM stays accurate and up to date. ⚙
Our team is fully distributed with colleagues across four continents and a few full-time world-wanderers who land in a different time zone every few weeks.
We're committed to building a company culture that values flexibility, bilateral feedback, and continuous improvement at every level.
You can check out the company website here: https://index.io/
The Position
We're looking for a Virtual Data Steward to research the people in our client firms' CRMs. We'll equip you with the tools to find their LinkedIn pages and email addresses, and you'll use those tools to make sure that each person's details are complete, correct, and up-to-date.
- The starting pay rate is $4 USD per hour
- 100% remote working environment with flexible scheduling (you decide when you work and how many hours you work)
- There will be opportunities for advancement into QA (Quality Assurance) and Team Management roles based on your personal preferences and your performance metrics (volume, speed and accuracy)
- At least one year of professional work experience, preferably dealing with data entry and/or customer relationship management.
- English language proficiency—currently all of our customers are headquartered in the United Kingdom and the USA.
- Linguistic learner—we can hand you an online manual of extensive written instructions and you will be able to read and follow those instructions in order to learn how to do the job independently.
- Attention to detail—our clients buy our product to ensure that their data is accurate, so we need folks with a laser eye for catching and eliminating typos and inconsistencies.
- Good time management skills—you'll be working completely remotely! This means you can work from your backyard, the local pub, or a beach in Jamaica, but it also means you'll need to be able to manage your own time really well.
- Responsiveness—we're a young, growing company and things change a lot! You'll need to be vigilant about keeping up with instant messages, emails, and changes to the documentation for all types of tasks you work on.
- Resourcefulness—if your direct manager is offline, you'll need to be able to look up the answers to your questions in our online wiki, work through them with a teammate, or find a workaround until leadership can get back to you.
- Kindness—we're a global community with teammates from all sorts of different places and backgrounds, and we think it's important to approach our differences with compassion and curiosity.
- Research professional contacts in our client firms' customer relationship management (CRM) databases to locate their LinkedIn or other business profiles
- Update each contact's details (Work Status, Company Name, Job Title, Location, and Email Address)
- Maintain satisfactory volume and accuracy metrics
- Do a minimum of 400 tasks per week (each task takes 60-90 seconds)
- Maintain an average accuracy score of ≥ 85%
- Be responsive to communication from your team (this includes keeping up to date with work emails, announcements made in Slack channels relevant to your job, direct Slack messages from management, and ongoing changes to the task guide documentation for any tasks you work on)
- Our expectation is that you respond to any messages or emails within 48 hours at the most (i.e. 2 business days), not counting weekends or holidays. This is true unless you are on the out-of-office calendar (then we won't expect a response until your first day back).
- Attend a 30-minute company all hands meeting every other Wednesday at 8am San Francisco time (or watch the recording if you can't attend)
- Attend a 30-minute 1:1 with your direct manager every other week at a pre-agreed time that works for both of you
- Submit an invoice for your hours worked by the 7th of each month
All you need to do is attach your resume and write a quick note to introduce yourself. In your note, please include your answers to the following three questions:
- Why are you interested in being a Virtual Data Steward at Index?
- Have you ever worked remotely? If so, what was the job and what challenges did you face trying to do it remotely? If not, what challenges do you think you might face working remotely?
- What do you like to do in your free time?
We'll let you know within two weeks whether we would like to move forward. If so, you'll receive a trial task to do in a Google spreadsheet. It should take you less than 90 minutes and we'll pay you for the time if you're hired. After the trial task, a team member will reach out to let you know whether or not we'd like to schedule a 30-minute interview. You'll usually have a first interview with our People Operations Associate and a second with our Head of People Ops. After the interview(s), we'll make a decision. We're looking forward to getting to know you! :)
Leadfeeder is a fast growing international SaaS scale-up. We are on a mission to bring web intelligence into business. Simply put, Leadfeeder helps companies know more about their website visitors, which enables them to convert promising visitors into sales leads.
We’re excited to share that Leadfeeder has merged with Echobot, a leading provider of European Sales Intelligence. As a result, we now have 250+ team members based across 30+ countries and six offices across the US and Europe. Together, our companies are striving to be the leading sales intelligence and go-to-market platform in our core European and North American markets.
Are you a self-starter looking to learn and grow as part of a world-class team? We are hiring many key people around the world to strengthen our rapid growth. Join our journey!
About this role
We are looking for a Data Engineer to join the team in Europe. Your exact location within Europe doesn’t matter, as this is a remote role. You will join the Data & Analytics team that currently consists of another Data Engineer, business Data Analyst and a product analytics oriented Data Scientist.
Our Data and Engineering teams have been fully remote since 2016!
Responsibilities
- Think critically and engineer improvements to our data storage and processing
- Develop and maintain our data warehouse and data pipelining. Our current stack on AWS is: Redshift, S3, Glue, Lambda, Step Functions and Kinesis
- Work closely together with our analysts on advanced product and business analytics reporting
- Support and enable Software Engineers, SREs and Product Managers to use data more efficiently
- Use data to help us and our customers make better decisions.
Requirements:
- Strong English communication and teamwork skills
- Programming skills. Python is required; others such as Ruby are nice to have.
- Strong understanding of database design and design patterns, both relational and non-relational
- Experience with AWS data storage and processing products such as Redshift, Kinesis, AWS Glue, S3, etc.
- Experience with search servers such as Elasticsearch, Lucene, Solr, etc.
- Experience with standard UNIX tools
Nice to have
- Experience with Machine learning
- BI/Analytics tools such as Tableau, Google Data Studio, Apache Zeppelin or others
- Web analytics
- Developing web applications
- ELK stack
- Startups experience
Benefits:
- Personal budget for home office improvements
- Monthly financial support for using your own equipment
- Up to 12 paid local bank holidays per year
- Flexible working hours
- Access to weekly yoga / fitness / meditation classes online
- Mental Health support
- Co-working space membership support (coming soon)
- Bi-annual company retreats that we will resume once Corona allows it (in the meantime, we are hosting virtual events)
- Quarterly online team building activities with your team
If this role excites you and sounds like a great fit, please apply below!
We are looking for a Data Science Intern with exposure to and interest in optimization, good programming skills, and a business-oriented mindset. This new team member will be instrumental in growing our business internationally as we continue to make the retail world a better, more sustainable place.
At Nextail, we empower retailers to create better experiences while using fewer of the world's resources. The cloud-based platform uses artificial intelligence, prescriptive analytics, and optimization to deliver agile merchandising decisions. To date, we’re backed by more than $12M in funding from leading venture capital investors and are working with global retailers like Versace, River Island, and Pepe Jeans.
The DS Team is part of the Product Team; our goal is to multiply the impact of our product on the business. Data science techniques, AI, ML, optimization, simulation, and visualization are the tools we always have at hand when we face a problem, but we never use them in vain. We work together with Product Managers, Developers, Data Engineers and Business Experts to solve the end-to-end problem.
The ideal candidate has a STEM background. We are looking for someone that is curious, analytical and proactive.
You will:
- Be part of the development of new optimization models alongside experienced data scientists.
- Work with real-world data and problems from our clients.
- Deal with large datasets of curated data.
- Learn about product data science development methodologies, like data ops and agile data science, as well as product discovery and delivery.
We offer:
- High flexibility: We’re strong believers that what matters most are results. Each Nextailer is empowered, through trust and ownership, to organize their time as they see fit without jeopardizing the time or work of their colleagues.
- Remote-first philosophy: Nextail started as a remote company and continues to offer a nice mix of remote and/or office-based environments around the world.
- International environment: We operate across the globe, with recent operations reaching from Europe all the way to Australia, and our team alone consists of professionals of more than 15 different nationalities. While many of us are multilingual, our working language is English.
- Diversity on all levels: United as a single team, we celebrate diversity at every dimension*. Professionally speaking, are you ready to work alongside tech geniuses, data science magicians, and fashionistas? You’ll have teammates with extensive experience in a wide variety of professional fields, including technology, retail, consulting and entrepreneurship.
- The laptop of your choice: We want you to work with the tools that are most comfortable for you!
- Paid internship: The internship will be compensated with 800 gross euros per month.
Requirements:
- You must have the ability to work through a University “Convenio” and ideally be able to work full-time once the internship finishes.
- You are graduating in engineering, mathematics or a similar degree.
- You have proficiency in English (other languages will be a plus).
- You have a mathematical background: statistics, probability, operations research, and knowledge of fundamental optimization concepts (objective function, constraints, shadow price, gap, etc.).
- You come with true interest and good programming skills.
- You are detail-oriented, organized, and analytical.
*Nextail is an equal opportunity employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status.
To all recruitment agencies: Nextail does not accept agency resumes. Please do not forward resumes to our jobs alias, Nextail employees or any other organization location. Nextail is not responsible for any fees related to unsolicited resumes
6 Month Contract Project | 100% Remote, Minneapolis Client
$55- $70/hour W2 Pay
Hello Temp is looking for a contract Senior Sales Strategy and Operations Analyst to drive analytics, strategy, and planning for a sales team. This project role interacts with various sales and marketing leaders and is responsible for building and influencing commercial sales models that enable the organization to grow at scale. This role requires a high level of business acumen, a customer-solution focus, multi-tasking skills, comprehensive organizational skills, and analytical ability.
Responsibilities
- Create, execute, and lead sales solutions that drive profitable growth
- Develop actionable analytics and propose recommendations and strategies to solve commercial opportunities in unique ways that return optimal value to our business
- Partner with cross-functional teams to outline and execute the sales process to achieve division- and franchise-level objectives
- Partner with the sales organization and appointed Advisory Boards to align on processes and projects to drive the business forward
- Assist in strategic planning
- Build strong collaborative relationships across the division (including Sales, Marketing, Finance, Pricing & Contracting, Sales Training, and other internal groups). Ensure alignment with proposed strategies and plans and facilitate collaboration toward achieving common goals and outcomes
- Participate in various sales-related projects.
Requirements
- Hands-on experience with financial and statistical software, Tableau and/or Anaplan preferred
- Expertise in MS Excel (creating spreadsheets and using advanced formulas)
Benefits
- Paid Time Off (Vacation, Sick & Public Holidays)
- Training & Development
THE ROLE 🤩
You will be the first hire on our data team and will be responsible for building out the foundations of our data and our analytical frameworks. You will help us tell stories through data, from whether or not certain features are succeeding to uncovering interesting aspects of our users' behavior.
Examples of the work you will be doing include:
- Performing analysis on different cohorts of users to understand our users better through data, and understanding what our users really want through the actions they take within the product.
- Helping us understand our transactions by running SQL queries. Do incentives for spending on eco-friendly merchants work? Are people struggling to top up the card?
- Building out Mixpanel and Tableau dashboards that help us understand strategically whether the product is doing well, tracking things like app opens and deposits.
- Building the data analytics systems and processes that will allow our data understanding to scale to hundreds of thousands of users.
- Collaborating with engineers to ensure we gather the right data for product and business insights.
- Partnering closely with senior leadership to gather requirements for successful dashboard designs, data visualizations, and ad-hoc reports on a timely basis.
SOUND LIKE YOU? 👐
- You have at least 2 years of previous experience with BI tools like Mixpanel and Tableau, and with SQL.
- You’ve worked with data analytics on a mobile app before.
- You have a strong product sense and are able to infer from the data whether the product is doing well.
- Knowledge of statistics is preferred but not required.
- You are an excellent writer, able to communicate and convince people with a structured use of data and clear argumentation.
- You’re watching the climate crisis unfold with real concern and are looking for an opportunity to dive in and make a difference.
- You’re ready and willing to work as part of a fully dispersed (remote) team.
Sinch Email is looking for a Sr. Sales Operations Analyst who is data-driven and passionate about jumping into problems and effectively leading change. The Sales Operations team works across Sinch Email's global organization to enable Sinch Email's growth and business success by operationalizing the go-to-market strategy through optimization and scalability. Responsibilities include deploying and maintaining critical reporting and analytics for the go-to-market team, driving data management and data process improvements, and developing data-driven insights that drive go-to-market decisions.
Responsibilities
- Responsible for reviewing, analyzing, forecasting and recording new sales bookings in accordance with finance and company policies
- Develop and maintain all reporting needs for Sales and go-to-market operations, including Lead Generation, Support and Retention metrics
- Provide access to key data for the sales and leadership teams through the build and implementation of reports and dashboards in our CRM and BI tools
- Analyze sales data to formulate forecasts, financial models and variance analysis
- Provide data analysis on sales and marketing activity, pipeline, and account management to identify business trends and areas for improvement.
- Focus on explaining the ‘why’ of historical results and on forward-looking decision support
- Deliver key business analytics, insights and metrics packages to sales and executive leadership, including preparing Board-level reports
- Recommend and analyze critical metrics to produce trends for success measurements, areas of risk, and future benchmarks
- Collaborate with team members to review data-driven insights and identify trends impacting sales effectiveness and ability to achieve sales goals
- Establish a data management process that defines how often data is shared and with whom
- Collaborate with internal partners to scope requirements for new data initiatives and reporting
- Ensure data and results are provided within applicable service level agreements
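The forecasting and variance-analysis work described above can be sketched in a few lines of pandas; all figures and column names below are invented for illustration:

```python
import pandas as pd

# Hypothetical bookings data; real column names depend on the CRM export.
bookings = pd.DataFrame({
    "quarter":  ["Q1", "Q1", "Q2", "Q2"],
    "region":   ["EMEA", "AMER", "EMEA", "AMER"],
    "forecast": [100_000, 150_000, 120_000, 160_000],
    "actual":   [ 90_000, 170_000, 125_000, 140_000],
})

# Variance analysis: actual vs. forecast per quarter.
summary = bookings.groupby("quarter")[["forecast", "actual"]].sum()
summary["variance"] = summary["actual"] - summary["forecast"]
summary["variance_pct"] = summary["variance"] / summary["forecast"] * 100

print(summary)
```

The same groupby-and-compare pattern extends naturally to region, rep, or product cuts for the trend and benchmark reporting listed above.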
Requirements
The ideal candidate has a strong analytical background with a proven understanding of business process reporting and analysis within a fast-paced, sales environment. A strong comfort-level with key sales performance metrics, financial modeling, and sales/revenue operations concepts will be essential to success in this role.
In addition, your experience and qualifications will include:
- 3 to 5 years' experience as a Sales Data, Financial or Business Analyst
- Experience in a high-growth SaaS or software company preferred but not required
- Salesforce.com experience: complex dashboards and reports, custom report types and understanding of data architecture
- Expert-level skill in Excel and experience with data warehousing and visualization/analytics/BI applications like Looker, Tableau, PowerBI, etc.
- Results-driven with strong decision-making skills and the ability to prioritize multiple objectives
- A genuine commitment to providing superior customer service to internal stakeholders
- Advanced critical thinking and creative problem solving skills
- Adept at communicating findings in a compelling non-technical manner
- Able to work on issues independently and drive improvement initiatives
- Excellent written and verbal communication skills, and exceptional attention to detail
- Experience and proven discretion handling protected, confidential, and sensitive information
Benefits
- STAY HEALTHY: We offer 100% employer-paid comprehensive medical, dental, and vision plans. A variety of supplemental plans are also provided to meet your individual needs including access to telehealth for all participants.
- CARE FOR YOURSELF: Take advantage of our free virtual counseling resources through our global Employee Assistance Program. Your mental health is as important as your physical health.
- SECURE YOUR FUTURE: Plan for your future with our Roth and Pre-tax 401(k) options including an employer match for all participants.
- TAKE A BREAK: Enjoy 5+ weeks of paid time off. We value balance and understand that performance at work requires time to rest at home and/or rejuvenate on vacation.
- PUT FAMILY FIRST: We know that families can be built in a variety of ways; therefore, we offer paid parental leave and family planning support through Maven.
- WORK WHEREVER: Our flexible remote work offerings allow you to work wherever you’re the most productive and successful. It’s what you do, not where you work, that matters.
- TREAT YOURSELF: Our comprehensive anniversary program offers a personalized experience in recognition of milestones achieved.
- MAKE AN IMPACT: Support betterment in your community and beyond by taking paid time off to support a volunteer program of your choice.
We embrace diversity and equal opportunity in all aspects of our business. We are committed to building a company that empowers individuals from a diverse set of backgrounds and values diversity of thought as a beacon for performance. The more inclusive we are, the better our work will be.
Additional Information:
(*Colorado Only*) Minimum salary of $76,700 + benefits.
*Note: Disclosure as required by sb19-085 (8-5-20) of the minimum salary compensation for this role when being hired in Colorado.
Working Location
We are a mixture of remote and hybrid working. Our primary location and only hybrid office is in London and we have remote teams across locations globally which overlap well with the GMT time zone.
Applicant requirements
- Able to overlap with the GMT UK timezone and hours of 10am - 5pm
- Our common company language is English and applicants must be able to communicate in this language fluently
- Must be able to travel to London for in person company events/workshops 2-5 times a year without a requirement of a visa or long haul travel
What we are looking for
We’re looking for a talented Senior Data Engineer to join our team. You will work with our Data Science team and our engineering teams, bringing your experience, insights and knowledge of how to extract data, create awesome tools and build out the infrastructure! As all of those are crucial elements to the future of DREST, this is a really exciting opportunity to showcase your skills!
What you will do..
- Create and maintain optimal data pipeline architecture to move data between our games and data warehouses.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, re-designing infrastructure, backend and servers for greater scalability.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, NoSQL and AWS big data technologies.
- Create data tools for analytics and data scientist team members that assist them in building and optimising our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Collaborate with your peers to ensure quality deliverables.
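A minimal sketch of the extract/transform/load flow described above, assuming newline-delimited JSON game events; in production the extract step would read from Kinesis or Kafka and the load step would COPY into a warehouse:

```python
import json
from datetime import datetime, timezone

# The event shape and table name below are assumptions for illustration.

def extract(raw_lines):
    """Parse newline-delimited JSON events, skipping malformed lines."""
    events = []
    for line in raw_lines:
        try:
            events.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # a real pipeline would route these to a dead-letter queue
    return events

def transform(events):
    """Keep valid events and normalise the timestamp to UTC ISO-8601."""
    rows = []
    for e in events:
        if "user_id" not in e or "ts" not in e:
            continue
        rows.append({
            "user_id": e["user_id"],
            "event": e.get("event", "unknown"),
            "ts": datetime.fromtimestamp(e["ts"], tz=timezone.utc).isoformat(),
        })
    return rows

def load(rows, table):
    """Stand-in for a warehouse COPY/INSERT step."""
    return {"table": table, "inserted": len(rows)}

raw = ['{"user_id": 1, "event": "level_up", "ts": 1700000000}', 'not json']
result = load(transform(extract(raw)), "game_events")
print(result)  # {'table': 'game_events', 'inserted': 1}
```

Keeping extract, transform, and load as separate pure functions makes each stage individually testable, which matters once pipelines grow.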
What you will bring..
- 4+ years of data engineering experience supporting data scientists and analysts by building strong data pipelines and tooling
- Proven experience and understanding of multiple database technologies, especially RDBMS, SQL and NoSQL
- Experience with event streaming platforms (Kinesis or Kafka)
- Good knowledge of AWS infrastructure
- Experience in using Python
- Proven knowledge of streaming or micro-service driven architecture
- Experience with ETL across large data sets
- Self-driven: you are able to proceed with little supervision
- Comfortable with Git
- Great communication skills, both verbal and written
- BS or MS degree in Computer Science, Analytics, BI, Finance or another analytical discipline, or similar experience
- Strong work ethic, intellectual curiosity, and drive to complete projects
Nice to haves
- Previous experience working as a data engineer in the mobile gaming industry
- Experience with tools like Terraform, Segment, Docker and Kubernetes
- Experience with Spark or Hadoop
- Experience with PostgreSQL
- Experience with GraphQL
- Experience with JavaScript/TypeScript
- Experience with Rails
- Experience with web sockets for push updates
- Experience with Agile methodologies
What you will get in return
- A competitive salary
- Private Medical Insurance
- Dental, Optical and Hearing cover
- Cycle Scheme
- Life Assurance
- 25 days annual leave + Local Bank Holidays
- Birthday Leave
- Cross-team learning opportunities
- Hybrid/Remote working options
More about DREST
DREST is the world’s first interactive luxury lifestyle & styling game, combining state-of-the-art technology with a luxury fashion and beauty styling experience that has real-time content at its core.
We are an award-winning fast-growth digital platform led by industry experts from the tech, luxury fashion, gaming and e-commerce worlds.
Centered around gamification, entertainment and creativity, DREST combines state-of-the-art technology with in-game fashion discovery, styling and shopping, all inspired by a real-time content feed reflecting the daily global fashion and lifestyle zeitgeist.
Our mission is to open up the fun, fantasy and creativity of the exclusive world of fashion, lifestyle and luxury to all via social gaming. By doing this, we are creating new and meaningful communications for one of the most exciting and impactful industries in the world through the application of gaming and proprietary technology and seminal art and design.
The leader in the development of luxury, lifestyle & beauty games, DREST partners with the very best of luxury brands worldwide including Gucci, Prada, Valentino, Burberry, Cartier, Breitling, Fenty, Armani, NARS and Bang & Olufsen amongst 250 others. DREST’s e-commerce partner is Farfetch, the world’s leading luxury fashion marketplace where everything one plays with is shoppable in real life. Named by Fast Company as one of ‘The Most Innovative Gaming Companies’ in the world (2021) DREST is forging ahead in this groundbreaking gaming genre.
Our employees are spread across the globe and we pride ourselves on our multiculturalism & diversity. We hire from the exciting worlds of gaming, tech, luxury fashion & lifestyle, bringing a variety of perspectives closer together with the aim of creating inclusive, dynamic, equitable & original products tailored to our audiences around the world.
Swimply will soon enter a key inflection point on its growth path, as we expect to substantially scale our company over the coming months. Our marketplace will soon be widely available throughout the country as we reach exponentially more hosts and guests. We have several core, foundational data investments planned for the coming months that we expect this role to contribute to greatly.
As Swimply’s first hire on the data team, you will drive the company’s ability to leverage data in every decision making process. From the executive suite down, there is a desperate desire for the greater use of data in identifying opportunity areas, understanding key successes and failures in the company, and in guiding our long term strategy. In this role, you will be responsible for getting data-driven insights into every corner of the business through reporting, analysis, and through your influence in each team’s prioritization process.
Requirements
Qualifications:
- 3+ years experience in SQL (preferably Redshift/PostgreSQL)
- Experience in developing data pipelines and ETLs
- Experience in Python or R for analysis and model development
- A proven record in working with product and engineering teams in driving decisions based on data
Nice to Haves:
- Experience with Looker
- Experience in aggregating and reporting geospatial data
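The geospatial aggregation mentioned in the nice-to-haves can start as simply as bucketing coordinates into grid cells; the bookings below are made up, and production systems typically use geohashes or H3 rather than plain rounding:

```python
from collections import Counter

# Hypothetical bookings with coordinates.
bookings = [
    {"lat": 34.05, "lng": -118.24},  # Los Angeles area
    {"lat": 34.10, "lng": -118.30},
    {"lat": 40.71, "lng": -74.01},   # New York area
]

def grid_cell(lat, lng):
    # ~1-degree cells (~110 km); a real system would use a finer,
    # area-uniform index such as H3.
    return (round(lat), round(lng))

counts = Counter(grid_cell(b["lat"], b["lng"]) for b in bookings)
print(counts.most_common(1))  # busiest cell first
```

Once bookings are keyed by cell, the usual reporting (counts, revenue, growth per area) is a plain group-by over the cell key.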
About us
Love, Bonito is the leading vertically integrated, omni-channel womenswear brand in Southeast Asia today. Officially founded in 2010, we have grown to over 300 people strong and are proudly headquartered in Singapore with an omni-channel presence in Indonesia, Malaysia and a retail franchise in Cambodia. In addition, we ship internationally to 19 countries including Australia, Brunei, Cambodia, Canada, Hong Kong SAR China, Indonesia, Japan, Macau, Malaysia, Myanmar, New Zealand, Philippines, Saudi Arabia, Singapore, South Korea, Taiwan, Republic of China, United States of America and Vietnam
We recently clinched US$50million for our Series C funding and have plans to venture deeper into and beyond fashion, to be a true life partner for the everyday Asian woman.
The team
The Love, Bonito team is a passionate, dynamic, innovative and fun-loving family. From fashion-lovers, savvy marketers to tech whizzes, we have a diverse team of talented individuals with one unified focus - our customer, the Love, Bonito woman. She is at the heart of everything we do and we pride ourselves in always taking an innovative, data-centric yet considerate approach in creating the right experiences, products and content for her. With big dreams and a grand mission, we’re looking for great like-minded people to join us - people who are as passionate, fearless and entrepreneurial.
If you’re looking for a dynamic, no corporate-BS environment to learn, grow, and really make an impact, we could be the perfect fit for you!
The role
You’ll have a front seat experience in impacting women across the globe through fashion and data.
You will play a pivotal role in championing a data-driven approach in our decision-making and empowering our organization to act smarter, faster and more efficiently. You’ll be responsible for designing and implementing a variety of data-enabled solutions, generating actionable insights and launching direct decision output models to improve our business. In addition, you’ll play an important role in setting up best practices for our analytics workflow and act as a leader and mentor for our team.
You should have/be
- A Love, Bonito Culture Fit
- Good grasp of the Love, Bonito brand and unique proposition
- A genuine curiosity to know how things work and how to make them better
- An entrepreneurial, self-starter, get-your-hands-dirty attitude
- Enjoy learning new things and pick them up quickly
- A demonstrated past and present result-driven data obsession
- Problem solver: You identify, scope and solve loosely defined problems; you design and execute logical action plans so that challenges can be solved systematically
- Analytical and data-driven: You love working with numbers and extracting knowledge from data. You’re extremely objective, thorough and diligent. You can also efficiently and independently query, manipulate, explore data using tools such as SQL & Python
- Business-driven: You always understand the business context first before diving deep into the data. You develop solutions and insights with business goals in mind.
- Creativity & communication: Ability to think "out of the box" and translate data into simple human language and compelling visualizations (data storytelling), to empower the broader organization to understand and leverage data
- Relationship management: Good relationship management and ability to work with other functional teams
Main responsibilities
As our Data Science Lead, you will collaborate closely with the broader data & analytics team and cross-functional stakeholders to drive growth and efficiency across our business with a focus on recommendation systems to bring fun personalization experiences to our community and inventory & supply chain optimization to help us produce the right quantities, reduce waste and bring the right products to our customers.
Creating best practices & mentoring our analytics team
- Act as a data science expert within the data team and mentor junior members of the team
- Key contributor for the team’s data science strategy and roadmap - defining success metrics in close collaboration with business and functional stakeholders
- Share and demonstrate the team’s work and impact to the broader organization
- Work alongside data engineering and business intelligence to develop our data infrastructure, supporting both business analytics requirements and data product pipelines
Decision modeling and automation
- Identify and scope new analytics opportunities with high ROI
- Refine problem statements and translate those into analytics projects that can be clearly defined and strategically solved
- Scope, plan and deliver analytics projects to achieve business objectives (ability to break down goals into clear milestones, objectives, KPIs and deliverables)
- Develop predictive models using a variety of statistical methods and machine learning techniques
- Design, build and continuously improve our recommendation engine and its use cases
- Work closely with technical and business teams to iterate solutions, A/B test and gather actionable feedback to continuously raise the standards of our analytics capabilities
- Coordinate sessions to review model performance, brainstorm and prioritize new opportunities
- Proactively leverage resources from the open-source community and research from academia to improve the standards of our models and analytics capabilities
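As a rough illustration of the recommendation-engine work above, here is a toy item-item collaborative filter using cosine similarity over co-interaction sets; all users and items are invented:

```python
import math

# Toy user-item interactions; in production these would come from
# purchase/like events, and the item names here are made up.
interactions = {
    "alice": {"dress_a": 1, "skirt_b": 1},
    "bea":   {"dress_a": 1, "skirt_b": 1, "top_c": 1},
    "cara":  {"top_c": 1, "shoes_d": 1},
}

def item_vectors(data):
    """Invert the user->items map into item -> set of users."""
    items = {}
    for user, liked in data.items():
        for item in liked:
            items.setdefault(item, set()).add(user)
    return items

def cosine(a, b):
    """Cosine similarity between two binary user sets."""
    return len(a & b) / math.sqrt(len(a) * len(b)) if a and b else 0.0

def similar_items(item, data, k=2):
    vecs = item_vectors(data)
    scores = {
        other: cosine(vecs[item], users)
        for other, users in vecs.items() if other != item
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(similar_items("dress_a", interactions))
```

Real systems replace the binary sets with weighted interactions and matrix-factorization or neural models, but the "items liked by similar users" intuition stays the same.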
Analytics frameworks and insights
- Design, develop and drive effective usage of analytical frameworks that unlock business value
- Define performance metrics, targets and ensure alignment across the company
- Design, build and maintain user-friendly dashboards for tracking business health as well as the performance of launched models and specific initiatives
- Present, prepare and coordinate ongoing performance reviews (e.g. weekly and quarterly)
New data integration and data infrastructure
- Ensure high data quality, reliability, and visibility of our analytics stack and workflow
- You’re a key contributor to the development of our analytics capabilities and collaborate closely with our data engineers to drive meaningful improvements that raise the quality and expand the scope of insights that it enables
- Assist in the evaluation of new initiatives and make key contributions to the formation of our longer-term data strategy
Requirements
Qualifications & Technical skills
- Highly proficient in SQL and Python for analytics and machine learning contexts
- Knowledge in statistical methods and experience in applying that to solve business problems
- Working knowledge of predictive modeling and machine learning from prototyping to production
- Experience with recommendation systems will be a big plus
- Experience with data visualization tools, preferably Tableau and Metabase
- Knowledge of experimental design (A/B testing)
- Experience working with cloud infrastructure such as AWS will be advantageous
- Experience in database and big data tools (especially AWS Redshift, Spark, Hadoop, Databricks platform will be advantageous)
- 5+ years of relevant experience
- Experience communicating results to leadership teams to drive the strategy
- Track-record of forming strong partnerships with cross functional teams
- M.S. or Bachelor's degree with an emphasis on quantitative coursework (e.g., Statistics, Data Science, Mathematics, Computer Science, Engineering)
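The experimental-design knowledge listed in the qualifications often comes down to something like a two-proportion z-test for an A/B experiment; the conversion counts below are made up:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)              # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant B converts 5.2% vs. control's 4.0% on 5,000 users each.
z = two_proportion_z(conv_a=200, n_a=5000, conv_b=260, n_b=5000)
print(round(z, 2))  # |z| > 1.96 ⇒ significant at the 5% level (two-sided)
```

In practice a library such as statsmodels would be used, along with a power calculation before the test to size the samples.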
Benefits
- A dynamic, no corporate-BS environment to learn, grow, and really make an impact
- Remote Work Stipend
- Laptop Stipend
- Forward thinking tech stack
- Supportive and awesome international teammates
- CloudWalk is an AI-first company building its own technology to bring justice to the broken payment system in Brazil. We are building what one would call a “self-driving bank”.
- Some people say a company is a unicorn when it reaches a valuation of over 1 billion dollars. We are one of those companies. But we are also a much cooler type of unicorn: a freaking amazing beast that is incredibly rare, cool as hell and damn hard to catch.
- We love data.
- We love living in a time when there’s access to an immense corpus of shared knowledge and incredible statistical and computational tools to extract meaningful information from the data.
- We love thinking in high dimensions.
- We like to explore before we exploit.
- We sprinkle sci-fi references in everything we do.
- You'll have access to tons of data, but you'll also be cursed with tons of noise. Most humans are good and amazing, so every bad behavior is hidden behind a swarm of good behavior. To identify the baddies, you will need to look for clues as to what is going on, to look for patterns, to ask intelligent questions and look for answers, to deal with uncertainty, to find the story behind the data, to turn the data into information. You'll need to be a detective.
- You will create new ML models and techniques and expand on the ones we have in production.
- You will have the chance to experiment. Do you think that the hot new model from a newly published paper can be applied to some of our data? Let's try it! Do you think the atmospheric pressure, the migration pattern of birds or the number of capybaras living on the border of the Pinheiros river are good predictors of credit card fraud? Let's investigate the data! Do you think there's a different perspective to look at a certain problem that could solve it better or complement our current approach? Let's put that to the test!
- You might have taken a bunch of courses and learned all about NNs, CNNs, RNNs, SVMs, CARTs, RFs, LDA, QDA, XGB, BERT and all that nice stuff. Those are all techniques to learn a mapping from X to Y. Here, you will also have to think long and hard about what X you'll be mapping to what Y.
- If you never allow yourself to fail, you will never allow yourself to take risks. The next big breakthrough will not come from someone who's playing it safe. Here, you'll be expected to experiment and you'll be allowed to fail, as long as you learn something. Over 99% of all species that ever existed on Earth are now extinct. But the remaining 1% would not have succeeded without them.
- Not going to lie. Sometimes you will have to do things you might not particularly enjoy. There will be times when we'll be at war, when we'll be together in the trenches, fighting the enemy, shoulder to shoulder, doing whatever needs to be done.
- As a member of a fully remote and distributed team, you are expected to complete tasks autonomously, being highly collaborative and self-driven.
- You must understand what machine learning is and what it’s not, and be able to identify and implement potential applications.
- You must be able to go through large amounts of data and identify useful patterns and relationships.
- You must be comfortable with programming. You’ll be doing most of your work in Python and SQL, but we believe that coding is mostly about problem solving and that programming skills are highly transferable between languages.
- You must be in your element reading and discussing research papers and coming up with potential applications to our problems.
- You must be able to communicate and debate in English. We’re proud to be a global team with members spread all over the world. Portuguese is not required.
- Our selection method is simple but hard. If you pass, you are definitely smart.
- 1) Online technical assessment
- 2) Technical interview
- 3) Cultural interviews
- If you are not willing to do an online quiz, do not apply.
- We believe in social inclusion, respect and appreciation of all people.
- We promote a welcoming work environment, where each CloudWalker can be authentic, regardless of gender, ethnicity, race, religion, sexuality, mobility, disability or education.
As a Senior Data Engineer at Demyst, you will be powering the latest technology from leading financial institutions around the world. Using innovative data sets and Demyst's software architecture, you will use your expertise and creativity to build best-in-class solutions. You will see projects through from start to finish, assisting in every stage from testing to integration. The Senior Data Engineer reports directly to the Head of Product and works directly with the existing global team who are based in North America, New Zealand, and Australia.
This role is fully remote.
Responsibilities
- Develop and maintain codebases for big-name clients in line with agreed deadlines
- Ensure that coding standards are followed and maintained (including code reviews)
- “Go-to” for technical and development queries
- Subject matter expert for the platforms developed
- Utilize client requirements to inform application development and feature design
- Technical leadership responsibility, guiding the team to success
- Advise clients on target state architecture for data ingestion, management and decision workflows
- Maintain an ongoing relationship with clients by handling data-related support requests
- Work onsite with customers to manage potentially complex enterprise deployments.
- Design and build reusable use cases with prototypes to demonstrate and extend the capability of DemystData platform
- Work with clients and internal stakeholders to understand requirements and reporting processes for subsequent analysis
- Satisfy documentation needs for management of developed solutions
Requirements
- Computer Science or Data Science degree (or commensurate work experience); Master's degree preferred
- 3+ years of Python or Scala programming
- Experience with CSV, JSON, Parquet, Avro, and other common formats
- Data cleaning and structuring (ETL experience)
- Knowledge of API (REST and SOAP), HTTP protocols, API Security and best practices
- Experience with SQL
- Experience with Git
- Understands globally distributed teams and asynchronous communication methods.
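Working across the listed formats often means small normalising round-trips like this stdlib-only CSV-to-JSON sketch; Parquet and Avro would additionally need libraries such as pyarrow or fastavro:

```python
import csv
import io
import json

# Tiny inline CSV standing in for a client file.
raw_csv = "id,score\n1,0.9\n2,0.7\n"

rows = list(csv.DictReader(io.StringIO(raw_csv)))

# Normalise types during the "transform" step: csv yields strings only.
for r in rows:
    r["id"] = int(r["id"])
    r["score"] = float(r["score"])

as_json = json.dumps(rows)
print(as_json)
```

Explicit type normalisation at the boundary is the part that matters: downstream schemas (warehouse tables, Parquet files) expect typed columns, not strings.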
Benefits
- Distributed working team and culture: work from anywhere in the world!
- Generous benefits and competitive compensation
- Be a part of the exploding external data ecosystem
- Join an established fast growth data technology business
- Collaborative, inclusive work culture
- Work with the largest consumer and business external data market in an emerging industry that is fueling AI globally
- Outsized impact in a small but rapidly growing team offering real autonomy and responsibility for client outcomes
- Stretch yourself to help define and support something entirely new that will impact billions
- Work within a strong, tight-knit team of subject matter experts and overseeing a team of renaissance technical talent
- Small enough where you matter, big enough to have the support to deliver what you promise
Demyst is committed to creating a diverse, rewarding career environment and is proud to be an equal opportunity employer. We strongly encourage individuals from all walks of life to apply.
Our Mission
Clarity AI brings societal impact to markets, and what that means to us is that we illuminate paths to a more sustainable world.
We do that by building a customizable, scalable sustainability tech kit into our clients’ existing workflows, which empowers them to efficiently and confidently assess, analyze and report on anything valuable to them or their clients, and everything required by regulation.
We are a sustainability tech company, founded in 2017. We leverage AI and machine learning technologies to ensure sustainability dimensions are a focal point of decision making for professional investors, corporates and consumers.
In financial markets, participants have used ESG (Environmental, Social and Governance) indicators to evaluate dimensions of risk and impact. However, this kind of assessment is limited: it only considers how companies behave in these three dimensions, not the products and services the companies provide as part of their contribution, positive or negative, to society. Leveraging scientific research and the latest technologies, we provide decision makers with the most transparent, reliable and comprehensive capabilities and tools to assess, analyze and report on social and environmental impact.
We have received several awards that show the value that we are providing to the market:
- World Economic Forum Pioneer
- One of the most innovative projects in the US (Harvard Innovation Lab)
- Top startup with impact worldwide by IMPACT Growth 2017
- Top 10 Fintech startup worldwide by BBVA Open Talent
- Founder and CEO Rebeca Minguela named a Young Global Leader (2017)
Our Values
Our mission drives us forward. Our values guide us along the way. In order to meet our goals, we require passion from everyone on the team that is driven by our purpose. We require excellence in everything we do with inidual expertise as a key dimension to our success. We require everyone to not just talk the talk when it comes to ethics and values, but also to walk the walk and live those values by example.
We are different. We work hard to become the best place to work and pride ourselves on our culture:
- Data-driven: Promoting objective, fact-based and solution-oriented discussions.
- Independent: Accessible and unbiased, we aren’t satisfied with the traditional way of doing things.
- Transparent: Communicating feedback transparently, constructively and in real-time.
- Achievement-oriented: Demanding excellence and celebrating and rewarding the best.
- Flexible: Working flexibly in the broadest sense (e.g., schedule, location, vacation, styles).
Our Team
Our people are our main asset. Having doubled our headcount in 2021, we are now a team of more than 200 highly passionate individuals coming from 30 different nationalities and composed of professionals from leading tech, consulting and banking firms, entrepreneurs, PhDs from top research institutions, and MBA graduates from top business schools. Our headquarters are in New York City, and we have additional offices in London, Madrid and Abu Dhabi. Additionally, we have a strong remote workforce of team members located in 13 different countries. Together, we have established Clarity AI as a leading company backed by investors and strategic partners such as BlackRock, SoftBank and Deutsche Börse who believe in us and share our goals.
Our leadership inspires. Our Founder and CEO, Rebeca Minguela, is a successful entrepreneur who has been recognized as one of the most distinguished leaders under the age of 40 by prestigious institutions like the World Economic Forum. Rebeca is joined by a superior leadership team coming from diverse backgrounds, countries and experiences.
We care about our people and think they deserve the best. We currently offer equity packages and other benefits on top of cash compensation. We aim to continue improving and shaping our offering in all areas of compensation and total rewards to ensure our teams’ efforts are rewarded and we remain competitive.
Requirements
What are we looking for?
We like data scientists who work with data in a scientific way and who share our passion for bringing social impact to the markets and making a difference.
At Clarity you will help develop the “gold standard” for social impact that will be used by investors, consumers and governments:
- By estimating the coverage of people’s needs by any company or institution anywhere using a bottom-up approach.
- Translating unstructured, complex data into quantitative, insights-first frameworks of social impact, making intelligent approximations when needed to put your algorithms to work at scale.
- Working side-by-side with a top-talent team from business and academia.
As the ideal candidate for the position:
- You hold a BS/MS degree in a quantitative field (CS, Engineering, Math, Physics, … ).
- You have experience in several of the following:
  - Analyzing and getting insights from complex, unstructured and incomplete datasets.
  - Using data visualizations to generate insights and communicate results.
  - Hands-on knowledge of data science and Big Data tools:
    - Python and its data science ecosystem (scikit-learn, pandas, numpy).
    - SQL and NoSQL databases.
  - and techniques:
    - Data processing and cleaning.
    - Regression, classification, clustering and other standard algorithms.
    - NLP and deep learning.
  - Start-up way of working:
    - Self-motivation and drive to succeed.
    - Getting-the-job-done mentality.
    - Learn by doing, all the time.
- You are fluent in English (C1).
- EU/Spanish Work permit
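As a sketch of the "standard algorithms" bullet above, here is a nearest-centroid classifier in pure Python; in practice scikit-learn's NearestCentroid or LogisticRegression would be used, and the labels and data points below are invented:

```python
import math

# Toy 2-feature training data: two made-up impact classes.
train = {
    "high_impact": [(0.9, 0.8), (0.8, 0.9)],
    "low_impact":  [(0.1, 0.2), (0.2, 0.1)],
}

def centroid(points):
    """Mean point of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def predict(x, model):
    """Assign x to the class whose centroid is nearest (Euclidean)."""
    cents = {label: centroid(pts) for label, pts in model.items()}
    return min(cents, key=lambda lbl: math.dist(x, cents[lbl]))

print(predict((0.85, 0.75), train))  # high_impact
```

The same predict-by-distance structure underlies k-means assignment and k-NN; the production versions differ mainly in feature engineering and scale.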
Responsibilities of the role include
- You’ll immerse yourself and dive deep into our ever-growing set of external and internal data flows, and produce insightful analyses that enable data-driven discovery and innovation throughout Clarity.ai and the emergent field of impact investment as a whole.
- You’ll research, design, validate, and deploy machine learned models that use structured and unstructured data and leverage advanced techniques for classification, regression, topic modeling, and NLP to produce new metrics that will be included in the Clarity platform.
- After experimenting and building with different data sources, modeling tools, and advanced techniques, you will share your findings with the team. Besides, you may eventually present your work at international conferences and top tier journals.
Our Mission
Clarity AI brings societal impact to markets, and what that means to us is that we illuminate paths to a more sustainable world.
We do that by building a customizable, scalable sustainability tech kit into our clients’ existing workflows, which empowers them to efficiently and confidently assess, analyze and report on anything valuable to them or their clients and everything required by regulation.
We are a sustainability tech company, founded in 2017. We leverage AI and machine learning technologies to ensure sustainability dimensions are a focal point of decision making for professional investors, corporates and consumers.
In financial markets, participants have used ESG indicators (Environmental, Social and Governance) to evaluate dimensions of risk and impact. However, this kind of assessment is limited: it only considers how companies behave in these three dimensions but doesn’t consider the products and services the companies provide as part of their contribution, positive or negative, to society. Leveraging scientific research and the latest technologies, we provide decision makers with the most transparent, reliable and comprehensive capabilities and tools to assess, analyze and report on social and environmental impact.
We have received several awards that show the value that we are providing to the market:
- World Economic Forum Pioneer
- One of the most innovative projects in the US (Harvard Innovation Lab)
- Top startup with impact worldwide by IMPACT Growth 2017
- Top 10 Fintech startup worldwide by BBVA Open Talent
- Young Global Leader awarded to Founder and CEO Rebeca Minguela (2017)
Our Values
Our mission drives us forward. Our values guide us along the way. In order to meet our goals, we require passion from everyone on the team that is driven by our purpose. We require excellence in everything we do, with individual expertise as a key dimension of our success. We require everyone to not just talk the talk when it comes to ethics and values, but also to walk the walk and live those values by example.
We are different. We work hard to become the best place to work and pride ourselves on our culture:
- Data-driven: Promoting objective, fact-based and solution-oriented discussions.
- Independent: Accessible and unbiased, we aren’t satisfied with the traditional way of doing things.
- Transparent: Communicating feedback transparently, constructively and in real-time.
- Achievement-oriented: Demanding excellence and celebrating and rewarding the best.
- Flexible: Working flexibly in the broadest sense (e.g., schedule, location, vacation, styles).
Our Team
Our people are our main asset. Having doubled our headcount in 2021, we are now a team of more than 200 highly passionate individuals coming from 30 different nationalities and composed of professionals from leading tech, consulting and banking firms, entrepreneurs, PhDs from top research institutions, and MBA graduates from top business schools. Our headquarters are in New York City, and we have additional offices in London, Madrid and Abu Dhabi. Additionally, we have a strong remote workforce of team members located in 13 different countries. Together, we have established Clarity AI as a leading company backed by investors and strategic partners such as BlackRock, SoftBank and Deutsche Börse, who believe in us and share our goals.
Our leadership inspires. Our Founder and CEO, Rebeca Minguela, is a successful entrepreneur who has been recognized as one of the most distinguished leaders under the age of 40 by prestigious institutions like the World Economic Forum. Rebeca is joined by a superior leadership team coming from diverse backgrounds, countries and experiences.
We care about our people and think they deserve the best. We currently offer equity packages and other benefits on top of cash compensation. We aim to continue improving and shaping our offering in all areas of compensation and total rewards to ensure our teams’ efforts are rewarded and we remain competitive.
Challenges for this role
We are looking for a Data Research Analyst who will help us build a reliable and scalable database of sustainability, social and governance policy information through research into companies’ CSR and ESG indicators, methodology, and quality assurance processes.
The role provides an opportunity to understand the reporting standards that exist and how companies’ sustainability performance is measured.
KEY RESPONSIBILITIES
Working as a Data Research Analyst you will be responsible for ensuring that we have the most accurate data, in order to provide the most reliable sustainability impact measurements. The Data Research Analyst role will include, but not be limited to, the following activities:
- Procure, verify and process data on policies by browsing through various financial, exchange and government regulator websites and materials.
- Adhere to quality assurance (QA) and quality control (QC) processes.
- Review company-level sustainability issues including environmental, social and corporate governance practices and synthesize these findings into broader frameworks.
- Perform ad hoc team projects as needed.
LOCATION AND DURATION OF THE CONTRACT
This position can be based in Madrid or London. Remote opportunities are available as well.
We are looking for someone who can work with us on a specific project for a period of 6 months (full-time).
Requirements
- Excellent oral and written English communication skills (minimum C1 level, proficient user).
- At least a Bachelor’s degree in Finance, Economics, International Relations, Environmental Studies or similar field of study.
- Intermediate knowledge of MS Excel and/or database applications.
- Strong analytical, organizational, problem-solving skills, with clear attention to detail.
- Highly motivated, independent and deeply passionate about sustainability and impact.
- Ability to work with complex data sets with consistently high levels of comprehension and to apply defined methodologies appropriately.
- Basic knowledge of Python and/or business intelligence tools is a plus.
DESIRED EXPERIENCE
- Ideally, experience with data collection and validation.
- Prior knowledge of corporate governance, ESG and/or financial markets is an advantage.
*Only CVs in English will be taken into consideration.