
Hatchet Logo

Run Background Tasks at Scale

License: MIT


Hatchet Cloud · Documentation · Website · Issues

What is Hatchet?

Hatchet is a platform for running background tasks and durable workflows, built on top of Postgres. It bundles a durable task queue, observability, alerting, a dashboard, and a CLI into a single platform.

Get started quickly

The fastest way to get started with a running Hatchet instance is to install the Hatchet CLI (on macOS, Linux, or WSL). Note that this requires Docker to be installed locally:

curl -fsSL https://install.hatchet.run/install.sh | bash
hatchet --version
hatchet server start

You can also sign up on Hatchet Cloud to try it out! We recommend this even if you plan on self-hosting, so you can have a look at what a fully-deployed Hatchet platform looks like.

To view the full documentation for self-hosting and using Hatchet Cloud, have a look at the docs.

When should I use Hatchet?

Background tasks are critical for offloading work from your main web application. Background tasks are usually sent through a FIFO (first-in, first-out) queue, which guards against traffic spikes (queues can absorb a lot of load) and ensures that tasks are retried when your task handlers error out. Most stacks begin with a library-based queue backed by Redis or RabbitMQ (like Celery or BullMQ), but as your tasks become more complex, these queues become difficult to debug and monitor, and they start to fail in unexpected ways.
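To make the pattern concrete, here is a toy, in-process sketch of a FIFO queue with retries. This is purely illustrative (plain Python, not Hatchet's or Celery's API); real systems persist the queue in Redis, RabbitMQ, or Postgres so tasks survive crashes.

```python
# Toy FIFO task queue with retries -- illustrative only, everything in memory.
from collections import deque

MAX_RETRIES = 3

def run_with_retries(queue: deque, handler) -> list:
    """Drain the queue in FIFO order, re-enqueueing failed tasks."""
    results = []
    attempts: dict = {}
    while queue:
        task = queue.popleft()
        try:
            results.append(handler(task))
        except Exception:
            attempts[task] = attempts.get(task, 0) + 1
            if attempts[task] < MAX_RETRIES:
                queue.append(task)  # retry later, behind newer tasks
            # else: give up; a real platform would alert here
    return results

# Usage: a flaky handler that fails the first time it sees "b"
seen = set()
def flaky(task):
    if task == "b" and task not in seen:
        seen.add(task)
        raise RuntimeError("transient failure")
    return task.upper()

print(run_with_retries(deque(["a", "b", "c"]), flaky))  # ['A', 'C', 'B']
```

Note that the failed task is retried after the tasks queued behind it, and a task that keeps failing is eventually dropped; a managed platform adds persistence, alerting, and visibility on top of this basic loop.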

This is where Hatchet comes in. Hatchet is a full-featured background task management platform, with built-in support for chaining complex tasks together into workflows, alerting on failures, making tasks more durable, and viewing tasks in a real-time web dashboard.

Features

📥 Queues

Hatchet is built on a durable task queue that enqueues your tasks and sends them to your workers at a rate that your workers can handle. Hatchet will track the progress of your task and ensure that the work gets completed (or you get alerted), even if your application crashes.

This is particularly useful for:

  • Ensuring that you never drop a user request
  • Flattening large spikes in your application
  • Breaking large, complex logic into smaller, reusable tasks

Read more ➶

  • Python
    # Imports and client setup (standard hatchet-sdk pattern)
    from pydantic import BaseModel
    
    from hatchet_sdk import Context, Hatchet
    
    hatchet = Hatchet()
    
    # 1. Define your task input
    class SimpleInput(BaseModel):
        message: str
    
    # 2. Define your task using hatchet.task
    @hatchet.task(name="SimpleWorkflow", input_validator=SimpleInput)
    def simple(input: SimpleInput, ctx: Context) -> dict[str, str]:
        return {
            "transformed_message": input.message.lower(),
        }
    
    # 3. Register your task on your worker
    worker = hatchet.worker("test-worker", workflows=[simple])
    worker.start()
    
    # 4. Invoke tasks from your application
    simple.run(SimpleInput(message="Hello World!"))
    
    
  • Typescript
    // Import the shared Hatchet client (as in the SDK quickstart)
    import { hatchet } from "./hatchet-client";
    
    // 1. Define your task input
    export type SimpleInput = {
      Message: string;
    };
    
    // 2. Define your task using hatchet.task
    export const simple = hatchet.task({
      name: "simple",
      fn: (input: SimpleInput) => {
        return {
          TransformedMessage: input.Message.toLowerCase(),
        };
      },
    });
    
    // 3. Register your task on your worker
    const worker = await hatchet.worker("simple-worker", {
      workflows: [simple],
    });
    await worker.start();
    
    // 4. Invoke tasks from your application
    await simple.run({ Message: "Hello World!" });
    

...
