Full-stack Agentic Superpowers

flow-state.dev gives you composable primitives for AI orchestration, streaming, state, and error handling — so you can explore new patterns instead of reinventing infrastructure.

Primitives

Everything you need, nothing you don't

Four block kinds, composable flows, and scoped state. Each piece works alone or together.

01
Four primitives. Compose freely.

Handler, generator, sequencer, router. The sequencer DSL alone gives you parallel steps, forEach, doUntil/doWhile loops, background work, branching, error recovery, and more.
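The chaining style behind `then` and `doUntil` can be illustrated with a dependency-free toy. This is a sketch of the pattern only, not flow-state.dev's implementation; `MiniSequencer` and its signatures are invented here:

```typescript
// Toy illustration of a chainable sequencer DSL (invented names, not the real API).
type Step<T> = (input: T) => Promise<T> | T;

class MiniSequencer<T> {
  private steps: Step<T>[] = [];

  then(step: Step<T>): this {
    this.steps.push(step);
    return this; // returning `this` is what makes the DSL chainable
  }

  doUntil(predicate: (value: T) => boolean, step: Step<T>, maxIters = 10): this {
    // Wrap the loop as a single step so it composes like any other.
    this.steps.push(async (value) => {
      let current = value;
      let i = 0;
      while (!predicate(current) && i < maxIters) {
        current = await step(current);
        i++;
      }
      return current;
    });
    return this;
  }

  async run(input: T): Promise<T> {
    let value = input;
    for (const step of this.steps) {
      value = await step(value);
    }
    return value;
  }
}
```

Because each combinator pushes one composed step and returns `this`, loops, branches, and recovery can all be modeled the same way.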

02
Flows are full APIs.

Define a flow and you have REST endpoints, SSE streaming, session management, and state snapshots. No route wiring.

03
Hybrid memory + filesystem.

Resources combine rich text with structured state — like files that carry metadata. Scoped to sessions, users, or projects.
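The shape of that idea, text content paired with typed metadata and keyed by scope, can be sketched in plain TypeScript. All names below are invented for illustration, not flow-state.dev's API:

```typescript
// Hypothetical sketch: a resource is rich text plus structured metadata,
// stored under a scope (session, user, or project). Invented names.
type Scope = "session" | "user" | "project";

interface Resource<Meta> {
  text: string; // rich text body
  meta: Meta;   // structured state carried alongside, like file metadata
}

class ResourceStore<Meta> {
  private byScope = new Map<string, Resource<Meta>>();

  set(scope: Scope, id: string, resource: Resource<Meta>): void {
    this.byScope.set(`${scope}:${id}`, resource);
  }

  get(scope: Scope, id: string): Resource<Meta> | undefined {
    return this.byScope.get(`${scope}:${id}`);
  }
}
```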

04
Built for an ecosystem.

Blocks are portable. Share a tool, a handler, or an entire flow. Community blocks compose with yours out of the box.

05
Streaming that just works.

Messages, components, status updates — all stream over SSE as blocks execute. Disconnect mid-response? Reconnect with a sequence cursor.
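The client side of cursor-based resumption reduces to tracking the last sequence number seen and dropping replays after reconnect. A minimal sketch, assuming a numeric `seq` field and an `?after=<cursor>` reconnect parameter (both assumptions, not flow-state.dev's actual protocol):

```typescript
// Sketch of cursor-based stream resumption on the client. The field name
// "seq" and the reconnect query parameter are assumptions for illustration.
interface StreamEvent { seq: number; data: string; }

class CursorTracker {
  private lastSeq = -1;

  // Returns true if the event is new; false if it is a replayed duplicate.
  accept(event: StreamEvent): boolean {
    if (event.seq <= this.lastSeq) return false;
    this.lastSeq = event.seq;
    return true;
  }

  // Send this back on reconnect, e.g. GET /stream?after=<cursor>
  get cursor(): number { return this.lastSeq; }
}
```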

06
Type-safe, end to end.

One Zod schema flows from server blocks through client SDK to React hooks. No glue code. No type drift.
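The principle is that one runtime validator is also the single source of the static type. With Zod that is `z.infer`; the dependency-free sketch below shows the same idea with an invented `Validator` shape so it stands alone:

```typescript
// Dependency-free sketch of "one schema, shared types". Zod's z.infer gives
// the same effect; the Validator type here is invented for illustration.
type Validator<T> = { parse: (raw: unknown) => T };

const chatInput: Validator<{ message: string }> = {
  parse(raw) {
    const obj = raw as { message?: unknown };
    if (typeof obj?.message !== "string") throw new Error("invalid ChatInput");
    return { message: obj.message };
  },
};

// The static type is derived from the validator, so server handlers,
// client SDK calls, and React hooks can all share it without drift.
type ChatInput = ReturnType<typeof chatInput.parse>;
```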

From definition to UI in four steps

Define blocks. Register flows. Wire up React. Write deterministic tests. Each layer is independent and composable.

import { defineFlow, generator, sequencer } from "@flow-state-dev/core";
import { z } from "zod";
// Reusable blocks — yours, your team's, or from the ecosystem
import { searchWeb, searchInternalDocs, searchMemory, parseQuery, SearchError } from "@research/blocks";
import { mergeAndRank, refineResults } from "@research/ranking";
import { gatherEvidence, scoreFindings } from "@research/analysis";
import { readDoc, writeDoc, docResource } from "@research/docs"; // illustrative package
import { logAnalytics, fallbackSearch } from "@infra/blocks";

// Full session state schema, referenced by the flow below
const stateSchema = z.object({ researchCount: z.number().default(0) });

// A tool that's a full pipeline — parallel search, iterative refinement, recovery
const deepResearch = sequencer({ name: "deep-research" })
  .then(parseQuery)
  .parallel(
    { // fan out to three sources at once
      web: searchWeb,
      docs: searchInternalDocs,
      memory: searchMemory,
    },
    { maxConcurrency: 3 },
  )
  .then(mergeAndRank)
  .doUntil( // loop until quality threshold met
    (result) => result.confidence > 0.9,
    refineResults,
  )
  .work(logAnalytics) // async — doesn't block the pipeline
  .rescue([{ when: [SearchError], block: fallbackSearch }]);

// Sequencer that analyzes and emits a component item to the UI
const analyze = sequencer({ name: "analyze" })
  .then(gatherEvidence)
  .then(scoreFindings)
  .tap((report, ctx) => { // emit without changing the payload
    ctx.emitComponent("report-card", {
      title: report.title,
      findings: report.findings,
      confidence: report.score,
    }).done();
  });

const agent = generator({
  name: "agent",
  model: "gpt-5-mini",
  prompt: "You are a research assistant.",
  // Blocks can declare only the state they need
  sessionStateSchema: z.object({ researchCount: z.number().default(0) }),
  history: (_input, ctx) => ctx.session.items.llm(),
  user: (input) => input.message,
  tools: [deepResearch, analyze, readDoc, writeDoc],
});

export default defineFlow({
  kind: "research-assistant",
  actions: {
    chat: {
      inputSchema: z.object({ message: z.string() }),
      block: agent,
      userMessage: (i) => i.message,
    },
  },
  session: {
    stateSchema,
    resources: { docs: docResource },
    clientData: {
      docList: (ctx) => /* derived view from ctx.state + ctx.resources */,
    },
  },
})({ id: "default" });

Compose blocks into flows with typed state, resources, and client data.

Ready to explore?

Get a streaming AI app running in minutes. Then push it somewhere no framework has gone before.