
A Rust framework for
compiled web applications

Build compiled websites, APIs, and admin surfaces in one fast, secure, compact Rust runtime — or adopt only the pieces you need.

Lithair can serve the compiled frontend itself, keep active state in memory for fast reads, and bring security features close to the runtime — while keeping auth, policy, event history, and other capabilities optional. Use it all together, or pick only what you need. When the workload fits, that can mean fewer services, fewer network hops, and a smaller operational footprint.

  • Serve the compiled site from the same runtime — this website does
  • Add auth, policy, and event history only when you need them
  • Fast in-memory reads when the active state fits
  • Use Lithair on its own or alongside SQL
main.rs
// A compact backend for workloads that fit the model.
use lithair::prelude::*;

#[derive(DeclarativeModel)]
struct Article {
    #[http(expose, validate = "non_empty")]
    title: String,
    #[lifecycle(audited, versioned = 3)]
    content: String,
}

fn main() {
    Lithair::new()
        .with_model::<Article>()  // CRUD API auto-generated
        .with_static_files("./public") // loaded in RAM
        .with_auth()              // sessions in memory
        .run();
}

A compiled web app example: frontend, API, auth, and state in one Rust runtime.

What is Lithair?

Lithair is a Rust web framework for building compiled websites, APIs, and admin surfaces in a compact runtime. It explores a simpler backend shape: keep the system all in one when that helps, or pick only the features you need — from frontend serving and in-memory state to auth, policy, admin tooling, and event history.

Conventional backend stack

Browser   React/Vue/Svelte build
    ↓ HTTP
Edge      Reverse proxy, TLS, static files
    ↓ HTTP
App       Express/Django/Spring + ORM
    ↓ TCP + SQL
Data      PostgreSQL/MySQL + Redis
Ops       Docker, K8s, monitoring...

More layers, more flexibility, and a broader operational surface.

Lithair on a fitting workload

Browser   Your frontend (any framework)
    ↓ HTTP

Single Binary

HTTP   Router + static files (from RAM)
API    Auto-generated from your models
Data   In-memory (SCC2) + event log
Auth   Sessions, RBAC, MFA — built-in

A smaller default runtime surface, with direct in-process access to active state.

Lithair is not a universal replacement for SQL or traditional stacks. Hybrid architectures are normal: keep SQL where it adds leverage, and use Lithair where a memory-first model removes unnecessary layers.

Why Lithair?

Because many teams know how quickly a backend grows in moving parts. Lithair comes from a practical question: when the workload fits, can we keep the system smaller without losing the capabilities that matter?


  REQUEST                      ACTIVE STATE (In Memory)       DURABILITY
  ───────                      ────────────────────────       ──────────

  GET /api/articles    ───>    ┌──────────────────────┐
                               │                      │
       memory read             │  articles: {         │
       current state           │    "abc": {...},     │
       no SQL round trip       │    "def": {...},     │
                               │  }                   │
  POST /api/articles   ───>    │                      │  ───>  events.raftlog
                               │  sessions: {...}     │        (append-only)
       update active state     │  users: {...}        │
       + persist event         │  static_files: {...} │
       for replay              │                      │
                               └──────────────────────┘
  STARTUP                              <───             snapshot + replay:
                               load snapshot,           events since
                               replay events            last snapshot
                               into SCC2

Fast access to active state

Lithair keeps the hot state in memory, which can remove a database round trip from read-heavy paths and simplify request handling when the workload fits the model.
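To make the read path concrete, here is a plain-Rust sketch of the idea, not Lithair's actual internals: Lithair uses SCC2, but a `RwLock<HashMap>` from the standard library is enough to show what "a read is a map lookup" means.

```rust
use std::collections::HashMap;
use std::sync::RwLock;

// A minimal stand-in for an in-memory article store.
struct Store {
    articles: RwLock<HashMap<String, String>>,
}

impl Store {
    fn new() -> Self {
        Store { articles: RwLock::new(HashMap::new()) }
    }

    // A read is a lock acquisition and a map lookup: no socket,
    // no SQL round trip, no serialization boundary.
    fn get(&self, id: &str) -> Option<String> {
        self.articles.read().unwrap().get(id).cloned()
    }

    fn put(&self, id: &str, body: &str) {
        self.articles.write().unwrap().insert(id.to_string(), body.to_string());
    }
}

fn main() {
    let store = Store::new();
    store.put("abc", "Hello");
    assert_eq!(store.get("abc").as_deref(), Some("Hello"));
}
```

The point of the sketch is the shape of the hot path: the request handler and the state live in the same process, so a read never leaves it.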

Built-in event history

Changes are recorded as immutable events. That gives you auditability, replay, and a clear record of how state evolved over time.

Smaller default operating surface

State snapshots plus event replay keep persistence close to the runtime. The result is a more compact default deployment, with fewer moving parts to configure and operate.
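The snapshot-plus-replay cycle can be sketched in a few lines of plain Rust. The `Event` type and field names here are hypothetical, not Lithair's real event format; what matters is the fold: load a snapshot, then apply every event recorded since it.

```rust
use std::collections::HashMap;

// Hypothetical event type for illustration only.
#[derive(Clone)]
enum Event {
    Created { id: String, content: String },
    Updated { id: String, content: String },
    Deleted { id: String },
}

// Rebuild current state by folding events over a snapshot: the same
// shape as "load snapshot, replay events since last snapshot".
fn replay(snapshot: HashMap<String, String>, events: &[Event]) -> HashMap<String, String> {
    let mut state = snapshot;
    for event in events {
        match event {
            Event::Created { id, content } | Event::Updated { id, content } => {
                state.insert(id.clone(), content.clone());
            }
            Event::Deleted { id } => {
                state.remove(id);
            }
        }
    }
    state
}

fn main() {
    let snapshot = HashMap::from([("abc".to_string(), "v1".to_string())]);
    let events = vec![
        Event::Updated { id: "abc".into(), content: "v2".into() },
        Event::Created { id: "def".into(), content: "new".into() },
        Event::Deleted { id: "abc".into() },
    ];
    let state = replay(snapshot, &events);
    assert_eq!(state.get("def").map(String::as_str), Some("new"));
    assert!(!state.contains_key("abc"));
}
```

Because the log is append-only, the same fold also gives you the audit trail for free: every intermediate state is reachable by replaying a prefix of the events.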

"The goal is not to argue against proven stacks. The goal is to ask whether some products can ship with fewer layers and still be easier to build, run, and understand."

When Lithair is a good fit

Lithair tends to work best when the workload is bounded, the active state benefits from living in memory, and event history adds real value.

1. Bounded working set

Your active application state fits comfortably in memory and can be rebuilt from snapshots plus events.

Good signal: state size stays predictable and operationally manageable.
2. Read-heavy or latency-sensitive paths

You want direct in-process access to active state instead of a database round trip on the hot path.

Good signal: most requests read current state from a bounded dataset.
3. Auditability and replay

You want a durable event history for compliance, debugging, operational traceability, or state reconstruction.

Good signal: how state changed matters as much as the current state.
4. Progressive adoption

You want to introduce Lithair in one bounded service or feature, while keeping SQL or other components where they already work well.

Good signal: hybrid architecture is acceptable and rollout can be incremental.

A good fit is about workload shape, not ideology.

Architecture

Lithair keeps the default runtime compact, but the pieces are modular. Use the parts that help and integrate with the rest of your stack where needed.

Model-driven API

Define your model once and generate routine API and validation layers from it, reducing repetitive backend plumbing on suitable workloads.
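As a rough intuition for what a `validate = "non_empty"` attribute could expand to, here is a hand-written sketch using a plain trait. The real `#[derive(DeclarativeModel)]` macro output is Lithair-internal and not shown here; this only illustrates the pattern of deriving validation from the model definition.

```rust
// The same Article shape as the main.rs example, without macros.
struct Article {
    title: String,
    content: String,
}

// A hypothetical trait that generated code might implement.
trait Validate {
    fn validate(&self) -> Result<(), String>;
}

impl Validate for Article {
    // What `#[http(validate = "non_empty")]` on `title` amounts to:
    // a check the framework runs before accepting a write.
    fn validate(&self) -> Result<(), String> {
        if self.title.trim().is_empty() {
            return Err("title: must be non-empty".to_string());
        }
        Ok(())
    }
}

fn main() {
    let ok = Article { title: "Hello".into(), content: "...".into() };
    let bad = Article { title: "   ".into(), content: "...".into() };
    assert!(ok.validate().is_ok());
    assert!(bad.validate().is_err());
}
```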

State evolution

Evolve models over time while keeping state reconstruction and event history explicit instead of spreading that logic across multiple layers.

Auth and policy

Sessions, roles, and permissions can live close to the runtime instead of being spread across separate services from day one.

Security capabilities

Optional auth features such as MFA can be added when the product needs them, without forcing them into every deployment.

HTTP layer

Serve routes, APIs, and static assets from the same runtime when that keeps the system smaller and easier to operate. Here, Astro builds the site into static assets, then Lithair loads and serves them from memory at startup.
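The "loads and serves them from memory at startup" step can be sketched with nothing but the standard library. This is not Lithair's `with_static_files` implementation, just the core idea: walk the build output once, keep the bytes in a map, and serve later requests from that map instead of the disk.

```rust
use std::collections::HashMap;
use std::fs;
use std::path::Path;

// Load every file directly under `dir` into memory once at startup.
// Recursion into subdirectories is omitted to keep the sketch short.
fn load_static(dir: &Path) -> std::io::Result<HashMap<String, Vec<u8>>> {
    let mut files = HashMap::new();
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        if entry.file_type()?.is_file() {
            let name = entry.file_name().to_string_lossy().into_owned();
            // Key by URL path so a router can look requests up directly.
            files.insert(format!("/{name}"), fs::read(entry.path())?);
        }
    }
    Ok(files)
}

fn main() -> std::io::Result<()> {
    // Build a throwaway "site" so the example is self-contained.
    let dir = std::env::temp_dir().join("lithair_demo_public");
    fs::create_dir_all(&dir)?;
    fs::write(dir.join("index.html"), b"<h1>hi</h1>")?;

    let files = load_static(&dir)?;
    assert_eq!(files.get("/index.html").map(Vec::as_slice), Some(&b"<h1>hi</h1>"[..]));
    Ok(())
}
```

After startup, serving a static asset is the same operation as serving active state: a map lookup in the running process.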

Operational safeguards

Basic protections such as rate limits, IP rules, and CORS handling are available in the runtime by default.
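For a sense of what an in-runtime rate limit looks like, here is a minimal fixed-window limiter per client IP. It is illustrative only and assumes nothing about Lithair's actual implementation; production limiters usually prefer token buckets or sliding windows.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Fixed-window counter: allow at most `max_per_window` requests
// per client within each window.
struct RateLimiter {
    max_per_window: u32,
    window: Duration,
    counters: HashMap<String, (Instant, u32)>,
}

impl RateLimiter {
    fn new(max_per_window: u32, window: Duration) -> Self {
        RateLimiter { max_per_window, window, counters: HashMap::new() }
    }

    fn allow(&mut self, ip: &str) -> bool {
        let now = Instant::now();
        let entry = self.counters.entry(ip.to_string()).or_insert((now, 0));
        if now.duration_since(entry.0) >= self.window {
            *entry = (now, 0); // new window: reset the counter
        }
        entry.1 += 1;
        entry.1 <= self.max_per_window
    }
}

fn main() {
    let mut limiter = RateLimiter::new(2, Duration::from_secs(60));
    assert!(limiter.allow("10.0.0.1"));
    assert!(limiter.allow("10.0.0.1"));
    assert!(!limiter.allow("10.0.0.1")); // third request in the window is rejected
    assert!(limiter.allow("10.0.0.2")); // other clients are unaffected
}
```

Because the limiter lives in the same process as the router, no extra proxy or sidecar is needed to enforce it.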

Event history

Keep a durable record of change and optional historical views when the domain needs replay, debugging, or auditability.

Admin surface

Built-in admin tooling can reduce the amount of backoffice code you need to write early on.

Optional replication

When distribution is needed, replication can be added as part of the architecture instead of becoming day-one complexity.

When a traditional stack is still a better choice

Lithair is not the best answer for every backend. Conventional stacks remain the better choice when their strengths match the problem.

Traditional stacks usually win when

  • Your domain depends on rich relational queries, joins, or reporting.
  • Your data footprint is too large or too variable for a practical memory-first model.
  • You rely on mature SQL tooling, analytics workflows, or established database operations.
  • Your team already has strong leverage around Postgres and a conventional backend stack.

Hybrid adoption remains a strong option

  • Keep SQL for analytical, relational, or reporting-heavy workloads.
  • Use Lithair for one bounded service or feature where active state benefits from living in memory.
  • Adopt progressively instead of forcing an all-or-nothing rewrite.
  • Choose architecture by workload, not by ideology.

Performance

Lithair can be very fast on the right workload because active state is kept in memory and reads can avoid a database round trip on the hot path.

Fast reads

When requests hit active in-memory state, Lithair can deliver very low-latency reads on suitable workloads.

Workload-dependent results

Latency and throughput depend on data shape, read/write mix, snapshot strategy, concurrency, hardware, and deployment shape.

Reproducible claims only

Any benchmark should be shared with its dataset, hardware, workload profile, and test method. Numbers without context are not useful.

Any numbers shown for Lithair should be treated as illustrative, reproducible, and workload-specific — not as universal promises. The practical claim is narrower: when the model fits, Lithair can remove layers that often add latency, operational overhead, and implementation complexity.

Get started

Try the model in a few minutes.

# Install the CLI

$ cargo install lithair-cli

# Create a new project

$ lithair new my-app

# Run it

$ cd my-app && cargo run

# Your server is running

✓ Listening on http://127.0.0.1:3007

✓ Active state loaded in memory

✓ Event log ready


Yoan Roblet

DevOps Engineer (5 years) · Ops (20 years) · Developer

I love DevOps. For large teams and large systems, the surrounding tooling is often the right choice. But after enough time operating complex systems, it is natural to ask whether every project really needs the full surface area.

Lithair started as a practical experiment: when the workload fits, can we ship something useful with fewer layers, less glue, and a shorter path from code to production? The goal is not to dismiss the rest of the ecosystem. It is to see where a smaller model is enough — and sometimes genuinely simpler to build and run.