Develop with AI

BRRR documentation is optimised for AI-assisted development. You can load the full docs into any AI coding assistant — Claude, ChatGPT, Cursor, GitHub Copilot, or any LLM that accepts context — and get accurate, working code for your integration without manually reading every page.

llms.txt

BRRR publishes machine-readable versions of its documentation following the llms.txt standard:

| File | Contents | Best for |
| --- | --- | --- |
| /llms.txt | Index of all documentation pages with short descriptions | Lightweight context loading |
| /llms-full.txt | Full text of every documentation page, concatenated | Complete context for complex integrations |

Loading docs into your AI assistant

Cursor / GitHub Copilot

Add https://docs.brrr.network/llms-full.txt as a documentation source in your IDE settings, or paste the URL directly into a chat:

@docs https://docs.brrr.network/llms-full.txt

Claude

Paste the following into your Claude conversation or system prompt:

Read the BRRR developer documentation at https://docs.brrr.network/llms-full.txt before answering.

You can also add BRRR docs as a persistent source in Claude's MCP settings so it loads automatically in every conversation.

ChatGPT / any web-based assistant

Open https://docs.brrr.network/llms-full.txt in your browser, copy the text, and paste it into the conversation before your question.
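The copy-and-paste flow above can also be scripted. A minimal sketch, assuming a runtime with a global fetch (Node 18+); the prompt layout is illustrative, not a required format:

```typescript
// Compose a prompt that front-loads the BRRR docs before the user's question.
// Pure function, so it works with docs fetched from /llms-full.txt or a local copy.
function buildDocPrompt(docs: string, question: string): string {
  return [
    "Use the following BRRR documentation as your only source:",
    "---",
    docs,
    "---",
    `Question: ${question}`,
  ].join("\n");
}

// Example: fetch the full docs once, then reuse them across questions.
async function askWithDocs(question: string): Promise<string> {
  const res = await fetch("https://docs.brrr.network/llms-full.txt");
  const docs = await res.text();
  return buildDocPrompt(docs, question);
}
```

Fetching the docs once and reusing the string keeps you from re-downloading ~1 MB of text on every question.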

OpenAPI schema

BRRR also publishes its REST APIs as OpenAPI schemas:

| Schema | Covers |
| --- | --- |
| /openapi.json | APIs |
| /agentic-openapi.json | Payments for AI Agents API |

If your coding agent or API tool can ingest structured schemas, feed it the schema for the surface you are building against, alongside llms-full.txt. Use llms-full.txt for guides and examples, openapi.json for the APIs, and agentic-openapi.json for exact Payments for AI Agents endpoints, parameters, and response models.
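If your tool cannot ingest OpenAPI directly, a few lines are enough to flatten a schema into a prompt-friendly endpoint list. A sketch; the sample schema fragment is illustrative, not the real BRRR schema:

```typescript
type OpenApiDoc = {
  paths: Record<string, Record<string, { summary?: string }>>;
};

// Flatten an OpenAPI document into "METHOD /path - summary" lines,
// which paste cleanly into an LLM context alongside llms-full.txt.
function listEndpoints(doc: OpenApiDoc): string[] {
  const lines: string[] = [];
  for (const [path, ops] of Object.entries(doc.paths)) {
    for (const [method, op] of Object.entries(ops)) {
      const summary = op.summary ? ` - ${op.summary}` : "";
      lines.push(`${method.toUpperCase()} ${path}${summary}`);
    }
  }
  return lines;
}

// Illustrative fragment only (not the real BRRR schema):
const sample: OpenApiDoc = {
  paths: { "/balance": { get: { summary: "Read card balance" } } },
};
```

Point the same function at the parsed contents of openapi.json or agentic-openapi.json to get a compact endpoint summary for your context window.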

Sample system prompt

Use this system prompt to get accurate BRRR integration code from any LLM:

You are a developer integrating the BRRR API and SDK.

API:
- Base URL: https://api.brrr.network
- Authentication: X-Api-Key header
- Terminal settlement states: FINISHED (success) or ERROR (failed)
- Settlement flow: CREATED → CONFIRMED → PROCESSING → SENT → FINISHED

SDK:
- Package: @holyheld/sdk (npm)
- EVM: uses Viem publicClient + walletClient (not ethers.js or wagmi)
- Solana: uses @solana/web3.js Connection + Wallet adapter
- Off-ramp method: holyheldSDK.evm.offRamp.topup(...)
- On-ramp method: holyheldSDK.evm.onRamp.requestOnRamp(...)

Documentation: https://docs.brrr.network/llms-full.txt
OpenAPI schema: https://docs.brrr.network/openapi.json

Always use environment variables for API keys. Never expose production keys in frontend code.
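The settlement flow in the prompt above implies a polling loop: keep checking until the settlement reaches FINISHED or ERROR. A sketch of that loop; the `/settlements/{id}` path and `status` response field are assumptions for illustration (confirm them against the API reference), and the state checker is injected so the loop is testable without the network:

```typescript
type SettlementState =
  | "CREATED" | "CONFIRMED" | "PROCESSING" | "SENT" | "FINISHED" | "ERROR";

// Poll until the settlement reaches a terminal state (FINISHED or ERROR).
// `getState` is injected so the same loop works against the real API or a mock.
async function waitForSettlement(
  getState: (id: string) => Promise<SettlementState>,
  id: string,
  { intervalMs = 5_000, maxAttempts = 60 } = {},
): Promise<SettlementState> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const state = await getState(id);
    if (state === "FINISHED" || state === "ERROR") return state;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error(`Settlement ${id} did not reach a terminal state`);
}

// Hypothetical status endpoint, shown only to illustrate the X-Api-Key header:
async function fetchState(id: string): Promise<SettlementState> {
  const res = await fetch(`https://api.brrr.network/settlements/${id}`, {
    headers: { "X-Api-Key": process.env.BRRR_API_KEY ?? "" },
  });
  const body = await res.json();
  return body.status as SettlementState;
}
```

Keeping the API key in `process.env` follows the rule in the prompt: never embed production keys in code.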

Building AI agents with Payments for AI Agents

The Payments for AI Agents API is purpose-built for AI systems. An agent can autonomously check balance, retrieve card details, and top up a Holyheld card using a small dedicated API surface — no SDK required.

When building an agent with Claude, the fastest path is the MCP server guide. It produces a working Claude Desktop integration in under 30 minutes.

Agentic system prompt

Use this focused system prompt when asking an LLM to build or extend a Holyheld agent:

You are building an AI agent that manages a Holyheld card balance using the Payments for AI Agents API.

API:
- Base URL: https://apicore.holyheld.com/v4/ai-agents
- Authentication: Authorization: Bearer <token> (not X-Api-Key)
- Endpoints:
  GET  /balance          → returns { payload: { balance: "42.00" } } (balance is a string, not a number)
  GET  /card-data        → returns cardNumber, expirationDate, cardholderName, CVV, billingAddress
  POST /topup-request    → body: { amount: "50.00" } (string, max 2 decimal places, no currency symbol)

Top-up is asynchronous:
- A 200 response means the request was accepted, not that the balance has updated
- Poll GET /balance every 30 seconds for up to 5 minutes after a top-up
- Record the balance before the top-up and compare; stop polling when balanceNow > balanceBefore

Card data handling:
- Only call GET /card-data when actively completing a checkout flow
- Never log or persist full card details unless absolutely required

Error codes the agent must handle:
- AI_TOPUP_INSUFFICIENT_BALANCE (500): User does not have enough available balance on the Holyheld main account. Notify the user; do NOT retry.
- AI_TOPUP_LIMIT_EXCEEDED (500): Cumulative spending limit reached. Notify the user; do NOT retry. Only the user can reset the limit from the Holyheld dashboard.
- AI_AUTHORIZATION_INVALID (401/403): Token invalid or missing. Check the bearer token.
- WRONG_REQUEST (400): Amount format is wrong. Fix and retry immediately.
- INTERNAL_SERVER_ERROR (500): Transient. Retry with exponential backoff (max 3×).

Amount formatting: always use amount.toFixed(2) to convert a number to a valid amount string.

Documentation: https://docs.brrr.network/llms-full.txt
OpenAPI schema: https://docs.brrr.network/agentic-openapi.json
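The accept-then-poll pattern described in the prompt above can be sketched as a small helper. The `AgentApi` interface is an assumption standing in for real HTTP calls to the documented `/balance` and `/topup-request` endpoints; the amounts and timings follow the rules in the prompt:

```typescript
interface AgentApi {
  getBalance(): Promise<string>;               // GET /balance -> payload.balance, a string like "42.00"
  requestTopup(amount: string): Promise<void>; // POST /topup-request with { amount }
}

// Top up and wait for the balance to increase. Balances are decimal strings,
// so compare numerically rather than lexically.
async function topUpAndConfirm(
  api: AgentApi,
  amount: number,
  { intervalMs = 30_000, maxAttempts = 10 } = {}, // ~5 minutes at 30 s
): Promise<string> {
  const before = parseFloat(await api.getBalance());
  await api.requestTopup(amount.toFixed(2)); // e.g. "50.00", no currency symbol
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    await new Promise((r) => setTimeout(r, intervalMs));
    const now = await api.getBalance();
    if (parseFloat(now) > before) return now; // top-up has settled
  }
  throw new Error("Top-up accepted but balance did not update in time");
}
```

A real implementation would also map the error codes above onto retry decisions: retry only WRONG_REQUEST (after fixing the amount) and INTERNAL_SERVER_ERROR (with backoff), and surface the two AI_TOPUP_* errors to the user without retrying.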

MCP integration with Claude Desktop

To expose these endpoints as native Claude tools, follow the Build an MCP Server guide. The resulting server registers tools for balance lookup, card-data retrieval, and top-up that Claude can call without any additional prompting.
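Once the server from the guide is built, registering it with Claude Desktop is a one-entry config change. A sketch of a typical claude_desktop_config.json entry; the server name, script path, and environment variable are placeholders for whatever the guide produces:

```json
{
  "mcpServers": {
    "holyheld-agent": {
      "command": "node",
      "args": ["/path/to/holyheld-mcp/build/index.js"],
      "env": { "HOLYHELD_AGENT_TOKEN": "your-bearer-token" }
    }
  }
}
```

Passing the bearer token via `env` keeps it out of the server's source code, consistent with the key-handling rules above.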

Tips for better results

  • Be specific about your stack. Mention whether you are using TypeScript or Python, and confirm you are using Viem for EVM wallet interactions.
  • Reference the target environment clearly. Ask the LLM to keep API keys in environment variables and avoid embedding credentials in examples.
  • Ask for error handling. Explicitly request that the LLM include error handling and reference the SDK's HolyheldSDKErrorCode enum.
  • Use the Go-Live Checklist. Before shipping, paste the checklist into your AI assistant and ask it to audit your implementation against each item.