This Position is Closed

This job is no longer accepting applications.

Job Highlights

The ClickHouse Operations Engineer at PostHog is responsible for automating, managing, and maintaining ClickHouse infrastructure to support the ingestion, storage, and querying of vast amounts of data. This role involves optimizing performance, scaling infrastructure, and building systems for dynamic provisioning of large ClickHouse clusters.

ClickHouse Operations Engineer

PostHog • Remote (US) • Engineering & Technical

Posted 1 month ago

Employment Type: Full-Time

Work Location: Remote

About This Role

ABOUT POSTHOG

We're shipping every product that companies need https://posthog.com/handbook/why-does-posthog-exist, from their first day to the day they IPO, and beyond: the operating system for people who build products.

We started with open-source product analytics, launched out of Y Combinator's W20 cohort https://posthog.com/handbook/story. We've since shipped more than a dozen products https://posthog.com/products, including:

  • A built-in data warehouse https://posthog.com/docs/data-warehouse, so users can query product and customer data together using custom SQL insights.
  • A customer data platform https://posthog.com/docs/cdp, so they can send their data wherever they need with ease.
  • PostHog AI https://posthog.com/max, an AI-powered analyst that answers product questions, helps users find useful session recordings, and writes custom SQL queries.

Next on the roadmap are CRM, Workflow, revenue analytics, and support products. When we say every product, we really mean it!

We Are

  • Product-led. More than 100,000 companies have installed PostHog, mostly driven by word-of-mouth. We have intensely strong product-market fit.
  • Well-funded. We've raised more than $100m from some of the world's top investors https://posthog.com/handbook/strategy/investors. We're set up for a long, ambitious journey.
  • Default alive. Revenue is growing 10% MoM on average, and we're very efficient. We raise money to push ambition and grow faster, not to keep the lights on.

We're focused on building an awesome product for end users, hiring exceptional teammates, shipping fast, and being as weird as possible https://posthog.com/deskhog.

WHAT YOU'LL BE DOING

ClickHouse is the core piece of infrastructure at PostHog https://posthog.com/docs/how-posthog-works/clickhouse. Every product and customer relies on it to ingest, store, and query data.

We need someone to automate, manage, and maintain ClickHouse as we grow towards capturing trillions of events per year and having one of the world’s largest clusters.
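To make that scale concrete, here's a back-of-the-envelope sketch of the sustained ingest rate a yearly event count implies (the one-trillion figure is illustrative, chosen only because the posting says "trillions per year"):

```python
# Back-of-the-envelope: average events/sec implied by a yearly event total.
# The 1e12 figure below is illustrative, not a published PostHog number.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

def sustained_rate(events_per_year: float) -> float:
    """Average events per second needed to ingest a yearly total."""
    return events_per_year / SECONDS_PER_YEAR

rate = sustained_rate(1e12)
print(f"{rate:,.0f} events/sec on average")  # roughly 31,710 events/sec
```

Real peak load would of course be several multiples of that average.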

This includes ClickHouse operations and scaling infrastructure, as well as node and instance-level performance optimization. We want to ensure that we have the right hardware deployed at the right time for each workload on ClickHouse.

You'll build systems and automations for provisioning and scaling our large ClickHouse clusters, which handle over 100 PB of data. You'll be able to investigate and experiment with the latest hardware the cloud providers have to offer in order to find the optimal setup for our solution. And yes, you'll have a budget to do this.
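As a rough illustration of the provisioning math at that scale, the sketch below estimates node counts from storage capacity alone. The instance names and usable-storage figures are assumptions for illustration; real sizing would also weigh CPU, memory, network, and replication topology:

```python
from math import ceil

# Hypothetical capacity-planning helper: how many nodes of a given shape
# are needed to hold a dataset with N replicas. The instance specs below
# are assumed for illustration, not actual PostHog hardware choices.
INSTANCE_STORAGE_TB = {
    "i4i.8xlarge": 7.5,    # usable local NVMe, TB (assumed)
    "i4i.16xlarge": 15.0,  # usable local NVMe, TB (assumed)
}

def nodes_needed(dataset_pb: float, instance: str, replicas: int = 2) -> int:
    """Minimum node count to store `dataset_pb` petabytes with replication."""
    usable_tb = INSTANCE_STORAGE_TB[instance]
    return ceil(dataset_pb * 1000 * replicas / usable_tb)

# 100 PB, 2 replicas, on the larger instance shape:
print(nodes_needed(100, "i4i.16xlarge"))
```

Even this naive storage-only estimate lands in the tens of thousands of terabyte-scale volumes, which is why dynamic, automated provisioning matters.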

You'll use Terraform, Ansible, and Kubernetes to automate the dynamic provisioning of instances, and you'll work on bleeding-edge ClickHouse features, like open-format-backed tables, not just maintenance.
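A minimal sketch of what that automation can produce, using the Kubernetes StatefulSet shape ClickHouse is commonly run under (the names, labels, and image tag are hypothetical, not PostHog's actual deployment):

```python
# Hedged sketch: build a Kubernetes StatefulSet manifest (as a plain dict)
# that provisioning automation could render and apply to scale a ClickHouse
# shard. All names and the image tag are illustrative assumptions.
def clickhouse_statefulset(
    shard: str,
    replicas: int,
    image: str = "clickhouse/clickhouse-server:24.8",
) -> dict:
    """Return a StatefulSet manifest for one ClickHouse shard."""
    labels = {"app": "clickhouse", "shard": shard}
    return {
        "apiVersion": "apps/v1",
        "kind": "StatefulSet",
        "metadata": {"name": f"clickhouse-{shard}"},
        "spec": {
            "serviceName": f"clickhouse-{shard}",
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": "clickhouse", "image": image}]},
            },
        },
    }

manifest = clickhouse_statefulset("shard-0", replicas=3)
```

In practice a manifest like this would be templated by Terraform or Helm and applied through the cluster API rather than built by hand.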

We're also building a query optimizer for ClickHouse, which means you will work on query performance tooling.
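Query performance tooling often starts from ClickHouse's built-in system.query_log table. A minimal sketch, with connection handling omitted (the columns used, such as query_duration_ms and read_bytes, are standard ClickHouse system-table columns):

```python
# Sketch of query-performance tooling: build a SQL statement that pulls the
# slowest recent queries from ClickHouse's system.query_log table.
def slow_query_sql(top_n: int = 10, min_ms: int = 1000) -> str:
    """SQL to list the top_n finished queries slower than min_ms milliseconds."""
    return f"""
SELECT
    query,
    query_duration_ms,
    read_rows,
    formatReadableSize(read_bytes) AS read
FROM system.query_log
WHERE type = 'QueryFinish' AND query_duration_ms >= {min_ms}
ORDER BY query_duration_ms DESC
LIMIT {top_n}
""".strip()

print(slow_query_sql(5))
```

The resulting statement would be run through any ClickHouse client; tooling like this is typically the first input to an optimizer's cost model.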

YOU’LL FIT RIGHT IN IF:

  • You bring OLAP database experience. This role is focused on ClickHouse, but strong experience with other OLAP databases is great too. We're looking for people who have dug into the internals of ClickHouse and other OLAP databases, not high-level users.
  • You bring experience automating dynamic provisioning of instances. Strong experience with Terraform, Ansible, and Kubernetes is important.
  • You bring experience with scale and complexity. We're building and operating high-scale, complex data storage solutions, and we need you to have experience with the challenges this brings.
  • You bring the stack we need. We build using Python, Terraform, Ansible, Kubernetes, AWS, and ZooKeeper (an alternative to ZooKeeper is fine).
  • You’re ready to do the best work of your career. We have incredible distribution, a big financial cushion and an amazing team. There’s probably no better place to see how far you can go.

If this sounds like you, we should talk.

We are committed to ensuring a fair and accessible interview process. If you need any accommodations or adjustments, please let us know.

WHAT’S IN IT FOR YOU?

Now that we've told you what you'll be building with us, let's talk about what we'll be building for you.
