Fornax Reference Documentation

Learn how to integrate the Fornax AI platform into your infrastructure.


Getting Started

Fornax acts as a high-performance proxy layer in front of your self-hosted LLMs, bridging isolated local compute and your developer-facing applications.

1. Registration
Register a developer account using your company email. The system issues an API key, which you send as the x-api-key header on every request.
2. Hardware Setup
Ensure your Ollama instance is running locally on its default port, 11434. Fornax routes and buffers requests to it to prevent timeouts.
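Before registering, you can confirm that Ollama is actually listening on port 11434. A minimal sketch using only the standard library (the function name and timeout value are illustrative, not part of Fornax):

```python
import socket

def is_ollama_up(host: str = "localhost", port: int = 11434, timeout: float = 1.0) -> bool:
    """Return True if something is accepting TCP connections on the Ollama port."""
    try:
        # Attempt a plain TCP connection; a refused or timed-out
        # connection means Ollama is not reachable at this address.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A plain TCP probe is enough for a liveness check; it does not verify which model is loaded.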

Authentication

All endpoints are protected by header-based authentication. Requests with a missing or invalid key are rejected immediately with HTTP 401 Unauthorized.

// Required Headers
x-api-key: d9f2...8cx1
Content-Type: application/json
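In client code, these headers can be assembled once and reused across requests. A small sketch (the helper name is an illustration, not part of any Fornax SDK):

```python
def fornax_headers(api_key: str) -> dict:
    # Both headers are required on every request; a missing or
    # invalid x-api-key is rejected with HTTP 401 Unauthorized.
    return {
        "x-api-key": api_key,
        "Content-Type": "application/json",
    }
```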

Inference API: POST /agent

The core intelligence router. Send a prompt via POST and Fornax forwards it to the backing inference engine while logging request metadata to PostgreSQL.

{
  "prompt": "Write a python function to compute fibonacci."
}
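Putting the pieces together, a request to /agent can be built with the standard library alone. This is a sketch under assumptions: the base URL below is a placeholder for your Fornax deployment's address, and the key is a dummy value.

```python
import json
import urllib.request

# Placeholder: substitute the address of your Fornax deployment.
FORNAX_AGENT_URL = "http://localhost:8000/agent"

def build_agent_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Construct (but do not send) a POST request for the /agent endpoint."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        FORNAX_AGENT_URL,
        data=body,
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

# To actually send it (requires a live Fornax instance):
#   with urllib.request.urlopen(build_agent_request("...", "your-key")) as resp:
#       print(resp.read())
```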