Smart Node / Self-hosted AI Server
Private · In active development
Python Docker PHP PostgreSQL

The idea

Imagine an empty Linux server.

You install Smart Node. From there you write to it the way you'd brief a new hire — "Here's our daily routine. Here are our customers. Here's how we draft proposals." It builds the workflows, connects to your tools, and from then on operates as part of your team — running on your hardware, under your control, around the clock.

Today, getting to that picture still takes engineering work. The architecture points the trajectory in the other direction: empty server in, working AI department out, configured by conversation.

Underneath, Smart Node is open-source infrastructure you install on your own server, designed to sit between your business operations and whichever AI model providers you choose to use. Think of it as the next layer of small-business infrastructure. Twenty years ago a small business installed a mail server. Fifteen years ago, a file server. Ten years ago, a CRM server. Now, an AI server. Smart Node is one.

Why this matters

A small business that wants AI agents working inside its operations has a few paths forward, each with its own trade-offs:

  1. Use raw provider APIs and write the orchestration yourself. Every request goes directly from your code to OpenAI, Anthropic, Google, or your local Ollama. This works — but eventually you discover you need workflow management, audit logs, memory across sessions, role isolation, approval gates, and error handling. Each becomes a project. Most small businesses don't have the engineering bandwidth.
  2. Use a managed agent SaaS. Faster to start. The trade-off: your orchestration logic, agent definitions, audit trails, and customer interaction history live on the provider's servers. Switching to a different stack later means re-implementing everything. Pricing is typically per-seat, monthly.
  3. Wait for the right tool to appear. Meanwhile, AI capability keeps advancing, and the gap to competitors who do figure it out grows.

Smart Node offers a fourth path: own the orchestration layer. Smart Node sits between your business operations and whichever LLM you call. The models themselves — whether OpenAI, Anthropic, Google, a local Ollama, or anything else — are accessed through the Gateway component, with Smart Node as the surrounding infrastructure: agent definitions, multi-step workflows, memory, document storage, audit log, approval planner, and operator dashboard. The result is a usable, observable, production-grade AI server inside your network, where you choose the models and remain free to change that choice as the landscape evolves.

Components

Smart Node is several independent open-source projects that run together on one server. Each project is useful on its own — you can install just one and benefit from it. They don't call each other directly. Instead, they share data through common stores (memory, documents, events) and two common interfaces: one for talking to external AI models (the gateway), one for talking to humans (the channels layer).

The projects, from the foundational interfaces up to the integration shell that ties them together:

External Model Interface — AI Gateway. The single point through which the entire system talks to external LLMs (local models like Ollama, or cloud providers like OpenAI, Anthropic, Google). One controlled egress = one place for authentication, audit, rate limiting, provider switching, cost tracking, and content screening.
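The single-egress idea can be sketched in a few lines. This is an illustrative mock, not the AI Gateway's actual API — class and field names here are assumptions, and the stub providers stand in for real Ollama or cloud calls:

```python
import time

class Gateway:
    """One controlled egress point: every model call passes through here,
    so auditing, rate limiting, and provider switching live in one place."""

    def __init__(self, providers, default="local"):
        self.providers = providers   # name -> callable(prompt) -> str
        self.default = default
        self.audit_log = []          # single place to inspect all egress

    def complete(self, prompt, provider=None):
        name = provider or self.default
        started = time.time()
        reply = self.providers[name](prompt)   # the only outbound call site
        self.audit_log.append({
            "provider": name,
            "prompt": prompt,
            "elapsed_s": round(time.time() - started, 3),
        })
        return reply

# Stub providers stand in for a local Ollama model and a cloud provider.
gw = Gateway({
    "local": lambda p: f"[local] {p}",
    "cloud": lambda p: f"[cloud] {p}",
})
print(gw.complete("summarise Q3 notes"))                 # routed to default
print(gw.complete("draft a proposal", provider="cloud")) # explicit switch
print(len(gw.audit_log))  # 2 — every call audited in one place
```

Because callers only ever see `Gateway.complete`, swapping a provider is a one-line configuration change rather than a code hunt.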

Communication Channels — Mail / Messengers. The inbound and outbound interface to humans. Every incoming email is captured with a unique identifier and registered. Every reply, follow-up, or referenced thread is traceable end-to-end. The email channel is in production today; chat platforms, messengers, and voice are planned extensions.

Task Tracking and Approval — Planner. Every task has a status, an owner, dependencies, an audit trail, and (where required) a human-approval gate. The planner is the place a human looks to see what's in flight, what's blocked, and what needs sign-off.

Documentation System — Doci. Versioned document store. Every document has a stable identifier, version history, tags, metadata. Both humans and agents read and edit through the same interface — never out of sync.

Semantic Memory — Mesh. Self-hosted semantic memory. Saves notes, decisions, worklogs and finds them by meaning rather than keywords. PostgreSQL + pgvector under the hood. Multi-tenant via Row-Level Security. Per-agent isolated workspaces with weighted multi-workspace search — let an agent be 70% engineer, 20% sysadmin, 10% security with a single configuration. Open source under MIT.
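The weighted multi-workspace merge can be shown with a toy example. This is a hedged sketch of the idea only — function and field names are hypothetical, and the real Mesh does this inside PostgreSQL with pgvector rather than in Python:

```python
def weighted_search(hits_by_workspace, weights, top_k=3):
    """Merge per-workspace similarity hits, scaling each workspace's
    scores by the agent's configured weight (e.g. 70% engineer)."""
    merged = []
    for workspace, hits in hits_by_workspace.items():
        w = weights.get(workspace, 0.0)
        merged.extend((note, w * score, workspace) for note, score in hits)
    merged.sort(key=lambda t: t[1], reverse=True)
    return merged[:top_k]

# Toy similarity scores, as a vector search might return them per workspace.
hits = {
    "engineer": [("deploy checklist", 0.90), ("API design note", 0.70)],
    "sysadmin": [("cron worklog", 0.95)],
    "security": [("incident review", 0.99)],
}
weights = {"engineer": 0.7, "sysadmin": 0.2, "security": 0.1}
top = weighted_search(hits, weights)
print([note for note, _, _ in top])
# → ['deploy checklist', 'API design note', 'cron worklog']
```

Note how the security hit scores highest raw (0.99) but ranks below engineering results once the 10% weight is applied — the configuration, not the raw similarity, decides what this agent "remembers first".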

Workflow Orchestration — Rein. Multi-agent workflows defined in plain text (YAML for the workflow, Markdown for each specialist's role). Provider-agnostic — calls the AI Gateway. Crash-recovery built in: each workflow gets its own state store, so reboots don't lose progress. Conditional branching, revision loops, multi-agent execution. Open source under MIT.
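To convey the plain-text style, here is what such a workflow definition might look like. The field names below are hypothetical illustrations of the YAML-plus-Markdown approach, not Rein's actual schema:

```yaml
# Illustrative only — not the real Rein schema.
workflow: draft-proposal
agents:
  researcher:
    role: roles/researcher.md   # each specialist's role is a Markdown file
  writer:
    role: roles/writer.md
steps:
  - agent: researcher
    task: "Collect facts about the client and the market"
  - agent: writer
    task: "Draft the proposal from the research notes"
    revise_until: approved      # revision loop until a reviewer signs off
```

Because the workflow is plain text, it can be versioned, diffed, and reviewed like any other file — which is also what makes "configured by conversation" plausible: an agent can write YAML.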

Virtual Desktop Isolation — Screenbox. When an agent needs to interact with browser-based dashboards, third-party SaaS, or any web application — it does so inside an isolated virtual desktop with a real Chromium browser. Not on the host system. Container isolation, per-agent ownership, snapshots for rollback. Open source under AGPL-3.0.

Console / Worker Agent — Flint. The on-demand agent process that bridges the filesystem and the rest of the system. Reads files, writes files, runs shell commands, calls LLMs through the Gateway, reads/writes memory and documents. Multiple Flint instances can run as different agent identities — one is typically configured as the administrator agent, the receptionist that parses inbound user requests and routes them to the appropriate specialist agent. Built-in safety primitives cover prompt-injection detection, persona hijacking, filesystem access, and encrypted credential storage.
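The receptionist pattern can be sketched as a tiny dispatcher. In the real system the administrator agent classifies requests with an LLM call through the Gateway; the keyword table below is a deliberate simplification so the shape of the flow is visible, and all names are made up:

```python
# Hypothetical routing table: request topic -> specialist agent identity.
SPECIALISTS = {
    "invoice": "finance-agent",
    "proposal": "sales-agent",
    "deploy": "engineer-agent",
}

def route(request_text, default="admin-agent"):
    """Administrator-agent stand-in: pick a specialist for an inbound
    request, falling back to handling it directly."""
    text = request_text.lower()
    for keyword, agent in SPECIALISTS.items():
        if keyword in text:
            return agent
    return default

print(route("Please draft a proposal for ACME"))  # sales-agent
print(route("hello there"))                       # admin-agent (fallback)
```

Multiple Flint instances running as different identities means the "specialist" returned here is a real process with its own memory workspace and credentials, not just a prompt prefix.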

Integration Shell — Smart Node Core. The single piece of software that turns the components above into a deployable system. Web administrative interface for system operators (status, agents, services, projects, scheduled jobs, live event stream); REST API; immutable event log table; configuration and authorisation layer; deployment tooling. Without it, the components are a collection of independent open-source projects. With it, they are a Smart Node installation.

How a request flows

An authorized user sends a request through an inbound channel — typically email to the administrator agent's mailbox. At the next scheduled tick, the administrator agent parses the request. If a task needs to be created, it is recorded in the planner and routed according to predefined workflows in the orchestrator to the appropriate specialist agent. That specialist decomposes the task and assigns subtasks to subordinate agents, again per workflow definitions. Every action — every parse, every route, every external LLM call, every file write — is recorded in the event log. As work progresses, agents update task status. When the work is complete, the administrator agent sends results back to the user through the same channel — or, if the workflow requires human approval, sends an approval request instead.
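The flow above, reduced to its skeleton, is a sequence of appends to one event log. This is a minimal sketch under assumed names — the real event log is an immutable database table, and each step here stands in for a whole subsystem:

```python
EVENTS = []  # stands in for the immutable event log table

def record(kind, **data):
    """Append-only: every action becomes one ordered event."""
    EVENTS.append({"seq": len(EVENTS), "kind": kind, **data})

def handle_inbound(email_id, body):
    record("parsed", email=email_id)                       # admin agent parses
    task_id = 1
    record("task_created", task=task_id, owner="specialist-agent")
    record("task_done", task=task_id)                      # specialist finishes
    record("reply_sent", email=email_id)                   # same channel back
    return task_id

handle_inbound("msg-001", "Research competitor pricing")
print([e["kind"] for e in EVENTS])
# → ['parsed', 'task_created', 'task_done', 'reply_sent']
```

The point of the shape is auditability: reconstructing "what happened to msg-001" is a filter over one ordered log, not a hunt across subsystems.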

Where this is going — federation

A single Smart Node sits in one company's network. The interesting future is multiple Smart Nodes connected to each other. A marketing agency's Smart Node and its client's Smart Node sharing specific workspaces. A holding company's Smart Node coordinating with each subsidiary's Smart Node. A small developer's Smart Node connecting to a partner's Smart Node for cross-organization work — without either side handing data to a third party.

Most existing AI agent frameworks are single-instance — they assume one organization, one deployment, one network. Smart Node is designed peer-to-peer from the start. Each instance is a node, not a hub. The future product is not "one Smart Node for one business" but "a federated network of Smart Nodes that talk to each other on owners' terms."

Who is this for

Indie hackers, freelancers, and small-to-mid-sized businesses who want AI infrastructure they own. Not enterprise software for organisations of 5,000+ — designed for teams of 1 to 500. One person or a small-to-mid-sized team that wants agents handling research, content, administration, and customer support while they focus on decisions.

A freelance developer who runs multiple client projects and needs agents to remember context across all of them. A solo founder who wants AI handling customer support, content publishing, and market research without rebuilding the orchestration each quarter. A small agency where three people do the work of ten because AI agents handle the repetitive parts.

The key word is own. Your data stays on your server. Your workflows are yours to modify. Your model choices are yours to change.

Availability

Published open source — usable independently today:

  • Mesh — semantic memory (MIT)
  • Rein — workflow orchestrator (MIT)
  • Screenbox — virtual desktops (AGPL-3.0)

In active development — additional components (worker agent, planner, documentation system, communication channels, AI gateway, and the integration shell that ties everything together) are being finalised for open-source release alongside the integrated Smart Node experience.

FAQ

Is Smart Node a single product I can download today? Not as one unified package yet. Three components are published as standalone open-source projects (linked above) and are usable on their own today. The remaining components and the integration shell are in active development and will be released over the coming months.

Do I need all the components? No. Start with one of the published components and add more as needed. They are designed to be useful independently and better together.

What does "self-hosted" mean here? Everything except external LLM API calls runs on your own server. No SaaS subscription for the orchestration layer, no data leaving your machine for the parts you control. Docker containers on a VPS, a home server, or a laptop.

What LLM providers work? Smart Node is provider-agnostic by design. Claude, GPT, Gemini, Ollama (local), OpenRouter (100+ models), and anything else with an API. The AI Gateway is the single egress point, so swapping providers does not require rewriting your workflows or agents. Use different models for different tasks: small models for routine work, larger models for complex reasoning.
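One plausible way to express "different models for different tasks" behind the Gateway is a routing table keyed by task class. The model names below are placeholders, not a recommendation:

```python
# Hypothetical task-class -> model routing table.
MODEL_FOR_TASK = {
    "triage": "small-local-model",       # routine work, cheap, e.g. via Ollama
    "drafting": "mid-size-cloud-model",
    "reasoning": "frontier-cloud-model", # complex multi-step reasoning
}

def pick_model(task_class):
    """Unknown task classes fall back to the cheapest model."""
    return MODEL_FOR_TASK.get(task_class, MODEL_FOR_TASK["triage"])

print(pick_model("reasoning"))      # frontier-cloud-model
print(pick_model("unknown-task"))   # small-local-model (fallback)
```

Because the table lives on your server, changing a provider is an edit to this mapping, not a rewrite of any workflow.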

How is this different from managed AI agent platforms? Smart Node is infrastructure you operate. Your orchestration logic, agent definitions, audit trails, and customer interaction history live on your server. You can audit every external request, change providers without re-implementing your workflows, and avoid per-seat pricing. The trade-off is that you operate the server — there is no managed cloud option (yet).

See Also