Help us increase the number of successful products in the world!
🌍 Location: We are fully remote and globally distributed! Our team is currently distributed between GMT-8 and GMT+2, so we only hire within these time zones.
🎤 Interview process: 1) Call with one of our Talent Partners, 2) a 60-minute technical interview, 3) a 15-minute call with a co-founder, and 4) a PostHog SuperDay (a paid day of work). Read more about our interview process.
🖥️ Team: All engineering teams are hiring.
💰 Compensation: Please check our compensation calculator.
🦔 Read more about how we hire and how we think about diversity & inclusion.
About PostHog
We're shipping every product that companies need to run their business, from their first day to the day they IPO and beyond. The operating system for folks who build software.
We started with open-source product analytics, launched out of Y Combinator's W20 cohort. We've since shipped more than a dozen products, including:
A built-in data warehouse, so users can query product and customer data together using custom SQL insights.
A customer data platform, so they can send their data wherever they need with ease.
Max AI, an AI-powered analyst that answers product questions, helps users find useful session recordings, and writes custom SQL queries.
Next on the roadmap are CRM, messaging, revenue analytics, and support products. When we say every product that companies need to run their business, we really mean it!
We are:
Product-led. More than 100,000 companies have installed PostHog, mostly driven by word-of-mouth. We have intensely strong product-market fit.
Default alive. Revenue is growing 10% MoM on average, and we're very efficient. We raise money to push ambition and grow faster, not to keep the lights on.
Well-funded. We've raised more than $100m from some of the world's top investors. We're set up for a long, ambitious journey.
We're focused on building an awesome product for end users, hiring exceptional teammates, shipping fast, and being as weird as possible.
Things we care about
Transparency: Everyone can read about our roadmap, how we pay (or even let go of) people, our strategy, and how we work, in our public company handbook. Internally, we share revenue, notes and slides from board meetings, and fundraising plans, so everyone has the context they need to make good decisions.
Autonomy: We don’t tell anyone what to do. Everyone chooses what to work on next based on what's going to have the biggest impact on our customers, and what they find interesting and motivating to work on. Engineers lead product teams and make product decisions. Teams are flexible and easy to change when needed.
Shipping fast: Why not now? We want to build a lot of products; we can't do that shipping at a normal pace. We've built the company around small teams – autonomous, highly efficient groups of cracked engineers who can outship much larger companies because they own their products end-to-end.
Time for building: Nothing gets shipped in a meeting. We're a natively remote company. We default to async communication – PRs > Issues > Slack. Tuesdays and Thursdays are meeting-free days, and we prioritize heads down building time over perfect coordination. This will be the most productive job you've ever had.
Ambition: We want to solve big problems. We strongly believe that aiming for the best possible upside, and sometimes missing, is better than never trying. We're optimistic about what's possible and our ability to get there.
Being weird: Weird means redesigning an already world-class website for the 5th time. It means shipping literally every product that relates to customer data. It means building an objectively unnecessary developer toy with dubious shareholder value. Doing weird stuff is a competitive advantage. And it's fun.
Who we’re looking for
We're seeking a backend engineer who thrives on building robust, high-performance data pipelines. You’re passionate about turning complex ELT workflows into reliable products and working deeply with modern data formats like Arrow, Iceberg, and Delta. You enjoy pushing the boundaries of what data tools can do while ensuring they remain stable and production-ready.
Ideally, you're as much a data engineer as you are a software engineer. You would be a great fit if you've used the tools of the modern data stack (maybe you've even built some) and you've also built complex software from the ground up.
What makes this role unique
At PostHog, data warehousing is both a core product for our users and a foundational platform for our internal teams. You'll help build the tools that enable users to import, transform, and analyze their data via SQL, while also creating the infrastructure that powers current and future PostHog features.
Our data stack is end-to-end:
We’ve developed our own SQL parser from scratch
Built pipelines to import data from APIs and databases
Created a SQL editor for data exploration
Developed a materialization pipeline to transform and serve data efficiently (sketched below)
There’s a huge breadth of challenges and opportunities to tackle, and nothing is off-limits. Data tooling is a first-class product at PostHog, not an afterthought. You’ll have the chance to build the data tools you’ve always wanted to use.
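As a taste of the materialization problem: saved queries can reference other saved queries, so views have to be refreshed in dependency order. Here's a toy sketch of that ordering step using Python's standard-library graphlib; every name in it is invented for illustration, not PostHog's actual code.

```python
# Toy illustration: refresh materialized views in dependency order.
# View names and the dependency map are invented for this example.
from graphlib import TopologicalSorter

# Each saved query maps to the set of views it reads from.
dependencies = {
    "daily_revenue": {"orders_clean"},
    "orders_clean": {"raw_orders"},
    "raw_orders": set(),
}

# static_order() yields leaves first, so every view is refreshed only
# after everything it depends on is already up to date.
for view in TopologicalSorter(dependencies).static_order():
    print(f"refreshing {view}")  # in reality: run the view's SQL and persist it
```

A production pipeline layers scheduling, incremental refresh, and failure handling on top, but a graph traversal like this sits at the core.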
What you'll be doing
Our team stretches from California to Hungary; we're remote-first and take it seriously. Every team at PostHog works out the way of working that suits its members best. You'll be working with a small team, currently 3 engineers and 1 product manager.
Your core responsibility will be maintaining and growing the data pipeline that lets our users import their data from API and database sources. The work spans everything from expanding the source library to refactoring how we stream data from ClickHouse to object storage using Arrow. You should be very comfortable writing well-architected, well-tested code, but also pragmatic enough to scope implementations so you can ship fast.
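To make the streaming part concrete, here's a minimal sketch of the ClickHouse-to-object-storage pattern, assuming clickhouse-connect's Arrow streaming API and pyarrow. The query, bucket, and region are placeholders, and the real service handles far more than this shows (batch sizing, retries, schema evolution).

```python
# Minimal sketch: stream a ClickHouse query result to S3 as Parquet,
# with Arrow record batches as the intermediary format. The query,
# bucket, and region below are placeholders, not real configuration.
import clickhouse_connect
import pyarrow.parquet as pq
from pyarrow import fs

client = clickhouse_connect.get_client(host="localhost")
s3 = fs.S3FileSystem(region="us-east-1")

writer = None
with client.query_arrow_stream("SELECT event, timestamp FROM events") as stream:
    for batch in stream:  # pyarrow.RecordBatch, one chunk at a time
        if writer is None:
            # Open the Parquet writer lazily so the schema comes from the data
            sink = s3.open_output_stream("my-bucket/exports/events.parquet")
            writer = pq.ParquetWriter(sink, batch.schema)
        writer.write_batch(batch)  # never holds the full result in memory
if writer is not None:
    writer.close()
```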
Examples of day-to-day work:
Designing and implementing a core interface that makes it easy to expand our source library (see the sketch after this list)
Debugging memory issues in our data pipeline service
Implementing granular schema control for users to configure when setting up an import
Building a graph traverser to materialize user-submitted queries
Instrumenting usage tracking to allow users to understand their import volume and costs
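For a flavor of the first item, here's a hypothetical sketch of what a pluggable source interface could look like; every name in it is invented, and PostHog's actual abstraction will differ.

```python
# Hypothetical source contract: adding a new import source means
# implementing one small interface. None of this is PostHog's real code.
from abc import ABC, abstractmethod
from typing import Iterator, Optional

import pyarrow as pa


class Source(ABC):
    """What every data source (a SaaS API, a database, ...) implements."""

    @abstractmethod
    def schemas(self) -> list[str]:
        """List the tables or endpoints this source can import."""

    @abstractmethod
    def read(self, schema: str, cursor: Optional[str] = None) -> Iterator[pa.RecordBatch]:
        """Yield Arrow record batches for one schema, resuming from
        `cursor` so incremental imports fetch only new rows."""


class PostgresSource(Source):
    """Example connector; a real one would page through each table."""

    def __init__(self, dsn: str):
        self.dsn = dsn

    def schemas(self) -> list[str]:
        return ["orders", "customers"]  # would be read from information_schema

    def read(self, schema: str, cursor: Optional[str] = None) -> Iterator[pa.RecordBatch]:
        yield from ()  # batch-producing logic elided
```

The payoff of a contract like this is uniformity: the pipeline service can schedule, retry, and meter any source the same way.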
Requirements
Experience with Python and Django. Our core application backend and data pipeline services are built with Python and Django
Hands-on experience with the Arrow data format. We stream data from ClickHouse to object storage with Arrow as the intermediary format
Strong skills in designing, architecting, and building data systems from the ground up
While frontend may not be your primary focus, you’re not afraid to dive in when needed
Nice to have
Experience using Temporal
Experience with ClickHouse
Experience with open source table formats (Iceberg or Delta tables)
Experience with ASTs
You've carried a pager and have dealt with incidents
You're comfortable with provisioning and deploying infrastructure
We believe people from diverse backgrounds, with different identities and experiences, make our product and our company better. That’s why we dedicated a page in our handbook to diversity and inclusion. No matter your background, we'd love to hear from you! Alignment with our values is just as important as experience! 🙏
Also, if you have a disability, please let us know if there's any way we can make the interview process better for you - we're happy to accommodate!