TanStack Start with PostgreSQL
This guide walks you through deploying a TanStack Start application with a PostgreSQL database to your own server using Haloy. Any Linux-based VPS or dedicated server will work.
The complete source code for this guide is available at: github.com/haloydev/examples/tanstack-start-postgres
What You’ll Build
A full-stack React application using:
- TanStack Start - React meta-framework with file-based routing and server functions
- PostgreSQL - Powerful, open source object-relational database system
- Drizzle ORM - TypeScript ORM for type-safe database queries
- Haloy - Simple deployment to your own server
Prerequisites
- Node.js 20+ installed
- Haloy installed (Quickstart)
- A Linux server (VPS or dedicated server)
- A domain or a subdomain
- Basic familiarity with React and TypeScript
This guide uses pnpm, but you can use npm instead by replacing pnpm add with npm install and pnpm with npm run for scripts.
1. Initialize the Project
mkdir my-tanstack-app
cd my-tanstack-app
pnpm init
2. Configure TypeScript
Create tsconfig.json:
{
"compilerOptions": {
"jsx": "react-jsx",
"moduleResolution": "Bundler",
"module": "ESNext",
"target": "ES2022",
"skipLibCheck": true,
"strictNullChecks": true
}
}
3. Install Dependencies
Install TanStack Start and React:
pnpm add @tanstack/react-start @tanstack/react-router react react-dom nitro
Install dev dependencies:
pnpm add -D vite @vitejs/plugin-react typescript @types/react @types/react-dom @types/node vite-tsconfig-paths
Install Drizzle and PostgreSQL:
pnpm add drizzle-orm pg dotenv
pnpm add -D drizzle-kit @types/pg
4. Update package.json
Update your package.json with the required configuration and scripts:
{
// ...
"type": "module",
"scripts": {
"dev": "vite dev",
"build": "vite build",
"start": "node .output/server/index.mjs",
"db:push": "drizzle-kit push",
"db:studio": "drizzle-kit studio"
},
}
Important: The "type": "module" field is required. Without it, Node.js treats your files as CommonJS instead of ES modules, causing errors like "This package is ESM only but it was tried to load by require". TanStack Start requires ES module support to work properly.
5. Create Vite Configuration
Create vite.config.ts:
import { defineConfig } from "vite";
import { nitro } from "nitro/vite";
import tsConfigPaths from "vite-tsconfig-paths";
import { tanstackStart } from "@tanstack/react-start/plugin/vite";
import viteReact from "@vitejs/plugin-react";
export default defineConfig({
server: {
port: 3000,
},
plugins: [
tsConfigPaths(),
tanstackStart(),
nitro(),
// react's vite plugin must come after start's vite plugin
viteReact(),
],
nitro: {},
});
About Nitro
TanStack Start uses Nitro as its server engine. For this deployment, we’re using the default Node.js preset, which works perfectly with Haloy. No additional configuration is needed. The empty nitro: {} object is sufficient.
Database Setup
1. Configure Drizzle
Create drizzle.config.ts:
import { config } from "dotenv";
import { defineConfig } from "drizzle-kit";
import { getDatabaseUrl } from "./src/db/database-url";
config();
const databaseUrl = getDatabaseUrl();
export default defineConfig({
out: "./drizzle",
schema: "./src/db/schema.ts",
dialect: "postgresql",
dbCredentials: {
url: databaseUrl,
},
});
2. Create Database Client
Create src/db/index.ts:
import "dotenv/config";
import { drizzle } from "drizzle-orm/node-postgres";
import { getDatabaseUrl } from "./database-url";
const databaseUrl = getDatabaseUrl();
const db = drizzle(databaseUrl);
export { db };
3. Define Your Schema
Create src/db/schema.ts:
import { integer, pgTable, timestamp, varchar } from "drizzle-orm/pg-core";
export const todos = pgTable("todos", {
id: integer().primaryKey().generatedAlwaysAsIdentity(),
title: varchar({ length: 255 }).notNull(),
createdAt: timestamp({ mode: "date" }).defaultNow(),
});
4. Database Connection Helper
Create src/db/database-url.ts to handle connection string construction:
export function getDatabaseUrl() {
const postgresUser = process.env.POSTGRES_USER;
if (!postgresUser) {
throw new Error("POSTGRES_USER environment variable not found");
}
const postgresPassword = process.env.POSTGRES_PASSWORD;
if (!postgresPassword) {
throw new Error("POSTGRES_PASSWORD environment variable not found");
}
const postgresDb = process.env.POSTGRES_DB;
if (!postgresDb) {
throw new Error("POSTGRES_DB environment variable not found");
}
// In production, we use the service name 'postgres' as the host
// In development, we connect to localhost
const host = process.env.NODE_ENV === "production" ? "postgres" : "localhost";
return `postgres://${postgresUser}:${postgresPassword}@${host}:5432/${postgresDb}`;
}
This helper constructs the database connection string from environment variables and automatically switches between localhost (development) and postgres (production hostname) based on NODE_ENV.
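To make the switch concrete, here is a standalone sketch of what the helper returns in development. The string template is inlined (rather than imported from src/db/database-url.ts) so the snippet runs on its own:

```typescript
// Inlined copy of the helper's logic, using the .env values from this guide.
process.env.POSTGRES_USER = "postgres";
process.env.POSTGRES_PASSWORD = "postgres";
process.env.POSTGRES_DB = "todo_app";
delete process.env.NODE_ENV; // unset during local development, so the host falls back to localhost

const host = process.env.NODE_ENV === "production" ? "postgres" : "localhost";
const url = `postgres://${process.env.POSTGRES_USER}:${process.env.POSTGRES_PASSWORD}@${host}:5432/${process.env.POSTGRES_DB}`;

console.log(url); // postgres://postgres:postgres@localhost:5432/todo_app
```

With NODE_ENV=production, the same template yields postgres://postgres:postgres@postgres:5432/todo_app, where the postgres hostname resolves to the database container on the server.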
5. Create Environment File
Create .env for local development. Make sure you have a local PostgreSQL instance running; the next step shows how to start one with Docker.
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=todo_app
6. Set Up Local Database (Optional)
For local testing, you can use Docker to run PostgreSQL without installing it:
docker run --name postgres-dev \
-e POSTGRES_USER=postgres \
-e POSTGRES_PASSWORD=postgres \
-e POSTGRES_DB=todo_app \
-p 5432:5432 \
-d postgres:18
This command:
- Creates a PostgreSQL container named postgres-dev
- Sets up credentials matching your .env file
- Exposes port 5432 to your local machine
- Runs in the background
To stop the container later:
docker stop postgres-dev
docker rm postgres-dev
Application Code
1. Create the Router
Create src/router.tsx:
import { createRouter } from "@tanstack/react-router";
import { routeTree } from "./routeTree.gen";
export function getRouter() {
const router = createRouter({
routeTree,
scrollRestoration: true,
defaultNotFoundComponent: () => <div>404 - not found</div>,
});
return router;
}
Note: You might see a TypeScript error about ./routeTree.gen not being found. This is expected. TanStack Start automatically generates this file when you run the dev server in the next steps.
2. Create the Root Route
Create src/routes/__root.tsx:
/// <reference types="vite/client" />
import {
createRootRoute,
HeadContent,
Outlet,
Scripts,
} from "@tanstack/react-router";
import type { ReactNode } from "react";
export const Route = createRootRoute({
head: () => ({
meta: [
{
charSet: "utf-8",
},
{
name: "viewport",
content: "width=device-width, initial-scale=1",
},
{
title: "TanStack Start Starter",
},
],
}),
component: RootComponent,
});
function RootComponent() {
return (
<RootDocument>
<Outlet />
</RootDocument>
);
}
function RootDocument({ children }: Readonly<{ children: ReactNode }>) {
return (
<html lang="en">
<head>
<HeadContent />
</head>
<body>
{children}
<Scripts />
</body>
</html>
);
}
3. Create the Index Route
Create src/routes/index.tsx:
import { createFileRoute, useRouter } from "@tanstack/react-router";
import { createServerFn } from "@tanstack/react-start";
import { eq } from "drizzle-orm";
import { db } from "../db";
import { todos } from "../db/schema";
const getTodos = createServerFn({
method: "GET",
}).handler(async () => await db.select().from(todos));
const addTodo = createServerFn({ method: "POST" })
.inputValidator((data: FormData) => {
if (!(data instanceof FormData)) {
throw new Error("Expected FormData");
}
return {
title: data.get("title")?.toString() || "",
};
})
.handler(async ({ data }) => {
await db.insert(todos).values({ title: data.title });
});
const deleteTodo = createServerFn({ method: "POST" })
.inputValidator((data: number) => data)
.handler(async ({ data }) => {
await db.delete(todos).where(eq(todos.id, data));
});
export const Route = createFileRoute("/")({
component: RouteComponent,
loader: async () => await getTodos(),
});
function RouteComponent() {
const router = useRouter();
const todos = Route.useLoaderData();
return (
<div>
<ul>
{todos.map((todo) => (
<li key={todo.id}>
{todo.title}
<button
type="button"
onClick={async () => {
await deleteTodo({ data: todo.id });
router.invalidate();
}}
>
X
</button>
</li>
))}
</ul>
<h2>Add todo</h2>
<form
onSubmit={async (e) => {
e.preventDefault();
const form = e.currentTarget;
const data = new FormData(form);
await addTodo({ data });
router.invalidate();
form.reset();
}}
>
<input name="title" placeholder="Enter a new todo..." />
<button type="submit">Add</button>
</form>
</div>
);
}
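The inputValidator in addTodo runs on the server before the handler sees the data. Its FormData handling can be exercised on its own (FormData is a global in Node 18+); this sketch copies the validator's body into a plain function:

```typescript
// Same logic as addTodo's inputValidator, extracted for a standalone run.
function validateTodoInput(data: unknown) {
  if (!(data instanceof FormData)) {
    throw new Error("Expected FormData");
  }
  return { title: data.get("title")?.toString() || "" };
}

const fd = new FormData();
fd.append("title", "Buy milk");
console.log(validateTodoInput(fd).title); // Buy milk

// A missing title field falls back to an empty string rather than throwing.
console.log(validateTodoInput(new FormData()).title === ""); // true
```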
4. Create Health Check Route
Create src/routes/health.tsx for health checks:
import { createFileRoute } from "@tanstack/react-router";
export const Route = createFileRoute("/health")({
server: {
handlers: {
GET: async () => {
return Response.json({ status: "ok" });
},
},
},
});
This endpoint responds without querying the database, ensuring the container can be marked healthy quickly.
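The Docker HEALTHCHECK defined in the next section considers the container healthy when this route answers with status 200. A minimal sketch of that contract, using the same Response.json call the handler makes (Response is a global Web API in Node 18+):

```typescript
// What the /health handler returns, checked the way a probe would check it.
const res = Response.json({ status: "ok" });

// The Dockerfile's probe exits 0 (healthy) only when the status code is 200.
const healthy = res.status === 200;
console.log(res.status, healthy); // 200 true
```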
Docker Configuration
1. Create Dockerfile
Create Dockerfile:
FROM node:24-slim AS base
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
COPY . /app
WORKDIR /app
FROM base AS prod-deps
RUN --mount=type=cache,id=pnpm,target=/pnpm/store pnpm install --prod --frozen-lockfile
FROM base AS build
RUN --mount=type=cache,id=pnpm,target=/pnpm/store pnpm install --frozen-lockfile
RUN pnpm run build
FROM base
COPY --from=prod-deps /app/node_modules /app/node_modules
COPY --from=build /app/.output /app/.output
HEALTHCHECK --interval=10s --timeout=3s --start-period=10s --retries=3 CMD node -e "require('http').get('http://localhost:3000/health', (r) => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"
CMD ["pnpm", "start"]
Key points:
- Uses multi-stage builds for a smaller final image
- Includes a HEALTHCHECK that queries the /health endpoint
- Schema changes are pushed before deployment using the tunnel feature (see below)
2. Create .dockerignore
Create .dockerignore:
node_modules
.git
.gitignore
*.md
dist
.DS_Store
Haloy Configuration
Create haloy.yml:
For PostgreSQL, we need to deploy two services: the database and the application. We can define both in a single haloy.yml file.
# Global server and environment variables shared across targets
server: your-server.haloy.dev
env:
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: "postgres"
- name: POSTGRES_DB
value: "todo_app"
targets:
# Database Service
postgres:
preset: database
image:
repository: postgres:18
port: 5432
volumes:
- postgres-data:/var/lib/postgresql
# Application Service
tanstack-start-postgres:
domains:
- domain: my-app.example.com
port: 3000
env:
- name: NODE_ENV
value: production
Important: Replace your-server.haloy.dev with the actual server domain you configured during the Quickstart setup. This should match the server where you installed the Haloy daemon using haloy server setup.
Also update:
- my-app.example.com - Replace with your actual domain or subdomain
- POSTGRES_PASSWORD - Change to a strong, unique password for production
Configuration Explained
We define two targets:
- postgres:
  - Uses the official postgres:18 image.
  - Mounts a volume postgres-data to /var/lib/postgresql to ensure data persistence.
  - Exposes port 5432.
  - Is accessible to other containers on the same server via the hostname postgres.
- tanstack-start-postgres:
  - Your application code.
  - Connects to the database using the environment variables defined above.
  - NODE_ENV=production ensures src/db/database-url.ts uses the postgres hostname.
Persistent Storage
The postgres target uses a named volume:
volumes:
- postgres-data:/var/lib/postgresql
This ensures that even if you redeploy or restart the database container, your data remains safe on the server.
Deploy
1. Test Locally
Before deploying, verify everything works locally. Make sure your local PostgreSQL database is running and your .env file is up to date.
pnpm db:push
pnpm dev
Visit http://localhost:3000 and try adding a todo.
2. Deploy the Database
Deploy the PostgreSQL database first:
haloy deploy -t postgres
Wait for the database deployment to complete before proceeding.
Note: If you started a local PostgreSQL container for testing, stop it first to free up port 5432:
docker stop postgres-dev
3. Push Your Schema to Production
Before deploying your application, you need to set up the database schema. Haloy’s tunnel feature lets you connect to the production database from your local machine:
# In one terminal, open a tunnel to the database
haloy tunnel 5432 -t postgres
The tunnel forwards the remote PostgreSQL port to your local machine. Now, in a separate terminal, push your schema:
# In another terminal, push your schema
pnpm db:push
Drizzle will connect to localhost:5432 (which tunnels to your production database) and apply your schema changes.
4. Deploy the Application
With the database schema in place, deploy your application:
haloy deploy -t tanstack-start-postgres
5. Verify Deployment
# Check status of all targets
haloy status --all
# View deployment logs
haloy logs -t tanstack-start-postgres
Working with Your Production Database
The tunnel feature is useful beyond initial deployment. Here are some common workflows:
Inspecting Data with Drizzle Studio
Drizzle Studio provides a visual interface for browsing and editing your database:
# Terminal 1: Open the tunnel
haloy tunnel 5432 -t postgres
# Terminal 2: Start Drizzle Studio
pnpm db:studio
Then open https://local.drizzle.studio in your browser to explore your production data.
Updating the Schema
When you modify your schema in src/db/schema.ts, push the changes to production:
# Terminal 1: Open the tunnel (if not already open)
haloy tunnel 5432 -t postgres
# Terminal 2: Push schema changes
pnpm db:push
Drizzle will show you a diff of the changes and prompt for confirmation before applying them.
Alternative: Migration-Based Workflow
The drizzle-kit push approach shown above is ideal for solo developers who want to move fast. For teams or projects that need a more controlled change management process, consider using migrations instead.
With migrations, schema changes are captured as versioned SQL files that can be reviewed in pull requests and applied consistently across environments. See the Drizzle Migrations documentation for details on this approach.