Why flow-state.dev?

Every AI feature needs the same infrastructure: call an LLM, stream the response, manage state across turns, handle errors gracefully, sync everything to the UI. Teams rebuild this from scratch every time — ad-hoc orchestration, hand-rolled SSE, application-specific retry logic, state scattered across closures and databases.

flow-state.dev makes these concerns framework primitives. You write the logic that matters. The framework handles everything else.

Design principles

These beliefs shaped every API decision in the framework. For the full discussion, read The flow-state.dev Philosophy.

  • Built for AI execution — AI apps are long-running, non-deterministic, streaming, and stateful. The framework treats these as the default execution model, not edge cases bolted onto request/response.
  • Foundations that unlock paradigms — The framework ships primitives, not pre-built solutions. The goal is foundations powerful enough that developers discover patterns we haven't imagined yet.
  • The framework owns the machinery — You define blocks with typed contracts. The framework runs the tool loop, manages retries, persists state, assembles context, and streams items. The architecture is the enforcement mechanism.
  • State that evolves — Memory isn't a conversation transcript. State evolves through typed operations, resources accumulate knowledge, and the framework feeds it back into the model's context across turns.
  • Streaming-first — Items stream as they're produced with sequence-number resume. Batch is a simplification of the streaming model, not the other way around.
  • Observability is structural — The stream is the trace. Every item carries provenance — block name, instance ID, parent block, phase, step index. No instrumentation required.
  • Your code, your control — The framework owns the runtime. You own everything above it — blocks, flows, schemas, projections all live in your repo.

What it looks like

import { defineFlow, generator, sequencer } from "@flow-state-dev/core";

// searchDocs, createArtifact, trackUsage, fallback, ModelError,
// stateSchema, resources, and clientData are defined elsewhere in the app.
const chat = generator({
  name: "chat",
  model: "gpt-5-mini",
  prompt: "You are a helpful assistant.",
  history: (_input, ctx) => ctx.session.items.llm(),
  user: (input) => input.message,
  tools: [searchDocs, createArtifact],
});

const pipeline = sequencer({ name: "pipeline" })
  .then(chat)
  .then(trackUsage)
  .rescue([{ when: [ModelError], block: fallback }]);

export default defineFlow({
  kind: "my-app",
  actions: { chat: { block: pipeline, userMessage: (i) => i.message } },
  session: { stateSchema, resources, clientData },
})({ id: "default" });

That gives you: streaming over SSE with resume, conversation history, tool loops, atomic state operations, typed clientData to the client, error recovery, and lifecycle hooks. From that one definition.

What you get

Four block primitives

Every piece of logic — calling an LLM, validating input, choosing a path, composing a pipeline — is one of exactly four block kinds:

Block       What it does                                       When to reach for it
Handler     Pure logic: validate, transform, mutate state      Data processing, state updates, tool implementations
Generator   LLM calls with managed tool loops and streaming    Chat, extraction, any AI generation
Sequencer   Compose blocks into pipelines                      Multi-step workflows with branching, parallelism, error recovery
Router      Dispatch to different pipelines at runtime         Mode switching, intent routing, conditional flows

All blocks share the same contract: block.run(input, ctx). Any block composes with any other block — and any block or sequence of blocks can be used as a tool. That means a single tool call can trigger a handler, a multi-step sequencer pipeline, or even a router that dispatches to different strategies. Your AI's tools can be as simple or as sophisticated as any other part of your workflow.
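To make the uniform contract concrete, here is a minimal sketch of what a shared `block.run(input, ctx)` shape enables. The `Block` and `Ctx` interfaces below are illustrative stand-ins, not the real types from `@flow-state-dev/core`:

```typescript
// Hypothetical simplification of the uniform block contract described above.
interface Ctx {
  state: Record<string, unknown>; // scoped state, heavily simplified here
}

interface Block<I, O> {
  name: string;
  run(input: I, ctx: Ctx): Promise<O>;
}

// A handler-style block: pure logic behind a typed input/output contract.
const wordCount: Block<{ text: string }, { words: number }> = {
  name: "wordCount",
  async run(input) {
    return { words: input.text.trim().split(/\s+/).length };
  },
};

// Because every block exposes the same run(input, ctx) signature,
// composing two blocks is just feeding one's output into the next.
async function runPair<A, B, C>(
  first: Block<A, B>,
  second: Block<B, C>,
  input: A,
  ctx: Ctx,
): Promise<C> {
  return second.run(await first.run(input, ctx), ctx);
}
```

The same property is what lets any block double as a tool: a tool call is just another caller invoking `run`.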

Flows are full APIs

Define a flow, register it with the server, and you have a complete REST API — action execution, session management, SSE streaming, state snapshots — with zero route wiring. Every flow you register becomes instantly callable from any client:

POST /api/flows/my-app/actions/chat          → Execute an action
GET  /api/flows/my-app/requests/:id/stream   → Stream results via SSE
GET  /api/flows/sessions/:id/state           → State snapshot with clientData

Multiple flows can coexist in the same server. Each one is self-contained with its own actions, state, and resources.
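The route shapes above follow directly from the flow definition. As a sketch, a client could build them like this (the path templates come from the docs; the helper functions themselves are illustrative, not part of any SDK):

```typescript
// Illustrative URL builders mirroring the documented route shapes.
// The flow kind ("my-app") and action name ("chat") come from defineFlow.
function actionUrl(kind: string, action: string): string {
  return `/api/flows/${kind}/actions/${action}`;
}

function streamUrl(kind: string, requestId: string): string {
  return `/api/flows/${kind}/requests/${requestId}/stream`;
}

function sessionStateUrl(sessionId: string): string {
  return `/api/flows/sessions/${sessionId}/state`;
}
```

A client would POST JSON to `actionUrl(...)` to execute an action, then open an SSE connection on `streamUrl(...)` to consume the results.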

Resumable streaming

Items stream over SSE as blocks execute — messages, reasoning, tool calls, state changes, custom components. Every event has a sequence number. Disconnect mid-response? Reconnect with a cursor and pick up exactly where you left off. No data loss. No duplicates. No manual SSE plumbing.
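The resume guarantee reduces to simple cursor arithmetic over sequence numbers. A minimal sketch of the semantics, with a simplified event shape (the sequence numbers are documented; the `StreamEvent` type here is not the framework's):

```typescript
// Simplified stand-in for a streamed item with its sequence number.
interface StreamEvent {
  seq: number;
  data: string;
}

// Given the events buffered for a request and the client's last-seen
// sequence number, return exactly the events the client missed, in order:
// nothing lost, nothing duplicated.
function eventsAfter(buffered: StreamEvent[], cursor: number): StreamEvent[] {
  return buffered
    .filter((e) => e.seq > cursor)
    .sort((a, b) => a.seq - b.seq);
}
```

On reconnect, a client that last saw `seq: 1` asks for the stream with cursor `1` and receives only events `2` onward.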

Scoped state that scales

Four isolation levels with atomic operations:

Scope     Lifetime                             Example
Request   Single action execution              Temporary processing data
Session   Across requests in a conversation    Chat history, mode, counters
User      Across sessions for a user           Preferences, accumulated knowledge
Project   Shared across users                  Configuration, global data

Each block declares only the state fields it needs via partial schemas. A counter block doesn't need to know about a preferences block's state.
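The isolation idea can be sketched as a projection: the runtime hands each block only the state keys it declared. The field names and the `scopeState` helper below are illustrative, not the framework's API:

```typescript
// Sketch of per-block state scoping: a block declares the keys it needs,
// and the runtime projects the full session state down to just those keys.
type SessionState = Record<string, unknown>;

function scopeState<K extends string>(
  state: SessionState,
  keys: readonly K[],
): Record<K, unknown> {
  const scoped = {} as Record<K, unknown>;
  for (const key of keys) scoped[key] = state[key];
  return scoped;
}

// A counter block sees only "count"; the preferences block's "theme"
// stays invisible to it, even though both live in one session state.
const session: SessionState = { count: 4, theme: "dark" };
const counterView = scopeState(session, ["count"] as const);
```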

Resources: hybrid memory and filesystem

Resources are more than key-value stores. Each resource combines rich text content with structured atomic state — like a file that carries metadata. An artifact resource can hold a document's full text alongside its title, tags, and timestamps, all in one typed container with atomic operations. Scoped to sessions, users, or projects, resources give your AI a persistent, typed workspace.
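As a mental model, a resource pairs text content with typed state in one container, and updates change both halves together. This is a sketch of the shape only; the real resource API and its atomicity guarantees belong to the framework:

```typescript
// Illustrative model of a resource: rich text plus structured state.
interface Resource<S extends object> {
  content: string; // the document's full text
  state: S;        // typed, structured metadata
}

// An atomic-style update returns a new resource with content and
// metadata changed together, so the two halves cannot drift apart.
function updateResource<S extends object>(
  res: Resource<S>,
  content: string,
  patch: Partial<S>,
): Resource<S> {
  return { content, state: Object.assign({}, res.state, patch) };
}

const artifact: Resource<{ title: string; tags: string[] }> = {
  content: "Draft body",
  state: { title: "Draft", tags: [] },
};

const revised = updateResource(artifact, "Final body", { title: "Final" });
```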

clientData entries are derived values computed from state and resources — the mechanism for exposing data to clients. You can't accidentally leak internal state because clientData is the sole data gateway.
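A clientData entry behaves like a pure projection from internal state to client-visible values. The field names below are invented for illustration; only the projection pattern comes from the docs:

```typescript
// Hypothetical internal state: one field is safe to expose, one is not.
interface InternalState {
  messageCount: number;
  apiKeyFingerprint: string; // internal detail, must never reach the client
}

interface ClientData {
  messageCount: number;
}

// The sole gateway: only fields explicitly derived here cross the boundary.
function projectClientData(state: InternalState): ClientData {
  return { messageCount: state.messageCount };
}
```

Because clients only ever see the projection's output, forgetting to hide a field is impossible; you would have to explicitly add it.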

Built for an ecosystem

Blocks and flows are portable by design. A tool block, a validation handler, a complete agentic workflow — each is a self-contained unit with typed inputs, outputs, and declared state dependencies. Share them across projects or publish them as packages. The uniform block contract means community blocks compose with yours without adapters or glue code.

Full-stack type safety

Define a Zod schema once. It validates at runtime, infers at compile time, and flows from server blocks through the client SDK to React hooks. One type system. Zero glue code. No code generation step.

The full stack

Package                    What it does
@flow-state-dev/core       Block builders, flow definitions, type contracts
@flow-state-dev/server     Execution runtime, stores, SSE streaming, HTTP routes
@flow-state-dev/client     Isomorphic API client — works in Node, browser, edge
@flow-state-dev/react      React hooks and renderers — wraps client, no transport logic
@flow-state-dev/testing    Deterministic test harnesses with generator mocks

Next steps

  • Quick Start — Build a streaming chat app in 5 minutes
  • Blocks — Deep dive into the four primitives
  • Building a Chat App — Complete walkthrough from blocks to React UI to tests