
Building Long-Running TTS Pipelines with LangGraph: Orchestrating Long-Form Audio Generation

· 17 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

Generating long-form audio content—audiobooks spanning hours, educational courses, or extended podcasts—presents unique challenges: API rate limits, network failures, resource constraints, and the sheer duration of processing. This article explores a production-ready architecture for long-running TTS pipelines that can gracefully handle long-form generation tasks, resume after failures, and maintain state across distributed systems.

Built with LangGraph, the system orchestrates complex workflows involving AI content generation (DeepSeek), text-to-speech conversion (OpenAI TTS), and distributed storage (Cloudflare R2). The key innovation: PostgreSQL checkpointing enables resumable execution, making it possible to generate 5-30+ minute audio segments reliably, even when individual API calls or processing steps fail.

The Challenge: Long-Form Audio at Scale

Why Long-Running Pipelines Are Hard

Traditional TTS approaches fail at scale:

  1. Time Constraints: A 30-minute audio narrative requires ~4,500 words, chunked into 10-15 API calls, taking 2-5 minutes to generate
  2. Failure Points: Each step (text generation, chunking, TTS, storage) can fail independently
  3. Memory Pressure: Holding all audio segments in memory for hours is impractical
  4. Cost Management: Retrying from scratch wastes API credits and compute time
  5. State Loss: Without persistence, crashes mean starting over

Our Solution: Stateful Orchestration

  • LangGraph manages workflow state transitions
  • PostgreSQL persists checkpoints after each successful step
  • R2 provides durable storage for completed segments
  • Resumable execution using thread_id for job recovery

System Overview

The pipeline orchestrates three main workflows:

  1. Research Generation: Structured content research using DeepSeek
  2. Narrative Text Generation: Long-form content creation with context awareness
  3. Audio Synthesis: Text-to-speech conversion with OpenAI TTS and Cloudflare R2 storage

Tech Stack

  • LangGraph: State machine orchestration with built-in checkpointing
  • DeepSeek: Long-form text generation (deepseek-chat, 2500+ token outputs)
  • OpenAI TTS: Streaming audio synthesis (gpt-4o-mini-tts, 4096 char limit per request)
  • PostgreSQL: Durable checkpointing for long-running jobs (Neon serverless for production)
  • Cloudflare R2: S3-compatible storage with zero egress fees (critical for multi-GB audio)
  • FastAPI: Async REST API for non-blocking long operations
  • Docker: Containerized deployment with ffmpeg for audio merging

Why This Stack for Long-Running Jobs:

  • Postgres checkpointing: Resume from any point in the workflow (text generation → chunking → TTS → upload)
  • Streaming TTS: Memory-efficient direct-to-disk writes (no buffering entire audio in RAM)
  • R2 durability: Segments uploaded immediately, survive process crashes
  • Async execution: Non-blocking background processing for hours-long jobs

Architecture Patterns

1. Core LangGraph State Machine

The system implements three distinct LangGraph workflows, each optimized for specific tasks.
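
The graph wiring itself isn't shown in the article, so here is a minimal sketch of how such a workflow might be assembled, reusing the node and routing names that appear in the sections below (build_graph, node_generate_text, node_tts_one_chunk, edge_should_continue); the chunking and finalize nodes are assumptions for illustration:

from langgraph.graph import StateGraph, END

def build_graph(checkpointer=None):
    g = StateGraph(TextState)
    g.add_node("generate_text", node_generate_text)
    g.add_node("chunk_text", node_chunk_text)        # hypothetical chunking node
    g.add_node("tts_one_chunk", node_tts_one_chunk)
    g.add_node("finalize", node_finalize)             # hypothetical merge/upload node

    g.set_entry_point("generate_text")
    g.add_edge("generate_text", "chunk_text")
    g.add_edge("chunk_text", "tts_one_chunk")
    g.add_conditional_edges(
        "tts_one_chunk",
        edge_should_continue,
        {"tts_one_chunk": "tts_one_chunk", "finalize": "finalize"},
    )
    g.add_edge("finalize", END)
    return g.compile(checkpointer=checkpointer)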

2. Research Generation Pipeline

The research pipeline generates structured research content using a focused LangGraph workflow.

Key Features:

  • Low temperature (0.3) for factual accuracy
  • Structured JSON output with validation
  • Evidence level classification (A/B/C)
  • Relevance scoring for topic matching
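
A rough sketch of what the research node might look like, assuming the same ChatDeepSeek wrapper used later in the article and a hypothetical research_prompt helper; the real implementation also validates the JSON structure, evidence levels, and relevance scores:

import json
from langchain_deepseek import ChatDeepSeek
from langchain_core.messages import HumanMessage

async def node_generate_research(state: dict) -> dict:
    # Low temperature for factual, structured output
    llm = ChatDeepSeek(model="deepseek-chat", temperature=0.3)
    resp = await llm.ainvoke([HumanMessage(content=research_prompt(state))])

    try:
        items = json.loads(resp.content)  # expected: a JSON list of research items
    except json.JSONDecodeError:
        items = []

    return {**state, "research_items": items, "error": None}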

3. Long-Form Text Generation Pipeline

The most sophisticated workflow, supporting both full generation and audio-only modes.

Conditional Routing Logic:

def should_skip_text_generation(state: TextState) -> str:
    """Route to text generation or skip to audio."""
    if state.get("existing_content") and state["existing_content"].get("text"):
        return "chunk_text"  # Audio-only mode
    return "generate_text"  # Full generation

def should_generate_audio(state: TextState) -> str:
    """Route to audio generation or end."""
    if state.get("generate_audio", True):
        return "chunk_text"
    return END  # Text-only mode

4. Audio Generation Pipeline (Standalone)

A simplified pipeline for generic long-form narration.

Iterative Chunk Processing:

The system uses a recursive edge pattern for processing chunks:

g.add_conditional_edges(
    "tts_one_chunk",
    edge_should_continue,
    {
        "tts_one_chunk": "tts_one_chunk",  # Loop back
        "finalize": "finalize",            # Exit loop
    },
)

def edge_should_continue(state: JobState) -> str:
    if state["chunk_index"] < len(state["chunks"]):
        return "tts_one_chunk"
    return "finalize"

Deep Dive: Key Architectural Components

State Management

LangGraph uses typed state dictionaries for type safety and IDE support:

class TextState(TypedDict):
    # Input metadata
    content_id: int
    title: str
    content_type: str
    language: str
    target_duration_minutes: int | None

    # Generation data
    research_items: list[dict]
    existing_content: dict | None
    generated_text: str | None

    # TTS fields
    voice: str
    chunks: List[str]
    segment_urls: List[str]
    manifest_url: Optional[str]
    audio_url: Optional[str]

    # Control flow
    generate_audio: bool
    database_saved: bool
    error: str | None

Postgres Checkpointing: The Key to Long-Running Resilience

For long-running jobs, checkpointing is non-negotiable. Without it, a network glitch at minute 25 of a 30-minute generation means restarting from scratch.

How Checkpointing Works:

async def run_pipeline(state: TextState, thread_id: str):
    db_url = os.getenv("DATABASE_URL")

    async with AsyncPostgresSaver.from_conn_string(db_url) as checkpointer:
        await checkpointer.setup()  # Creates checkpoint tables
        app = build_graph(checkpointer=checkpointer)
        config = {"configurable": {"thread_id": thread_id}}

        # LangGraph automatically saves state after each node execution
        final_state = await app.ainvoke(state, config=config)
        return final_state

What Gets Checkpointed:

  • Complete state dictionary after each node
  • Edge transitions and routing decisions
  • Timestamps and execution metadata
  • Partial results (generated text, uploaded segment URLs)

Recovery Example:

# Job crashes after generating 8 of 12 TTS segments
# Resume with same thread_id:
final_state = await run_pipeline(initial_state, thread_id="job-12345")

# LangGraph:
# 1. Loads last checkpoint from Postgres
# 2. Sees 8 segments already uploaded to R2
# 3. Continues from segment 9
# 4. Completes remaining 4 segments

Production Benefits:

  • Cost Savings: No wasted API calls on retry
  • Time Efficiency: Resume from 80% complete, not 0%
  • Reliability: Transient failures (rate limits, timeouts) don't kill long-form jobs
  • Observability: Query checkpoint table to monitor progress
  • Parallel Execution: Multiple jobs with different thread_id values
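
As a concrete example of the observability point, progress can be monitored by counting checkpoint rows per thread_id. This is a sketch assuming the default checkpoints table that AsyncPostgresSaver.setup() creates (table and column names may differ between LangGraph versions):

import os
import psycopg

def checkpoint_count(thread_id: str) -> int:
    """Rough progress indicator: how many checkpoints a job has written so far."""
    with psycopg.connect(os.environ["DATABASE_URL"]) as conn:
        row = conn.execute(
            "SELECT count(*) FROM checkpoints WHERE thread_id = %s",
            (thread_id,),
        ).fetchone()
    return row[0]

print(checkpoint_count("job-12345"))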

Text Chunking Algorithm: Optimizing for Long-Form Narration

For 30-minute audio (4,500+ words), naive chunking creates jarring transitions. Our algorithm balances API constraints with narrative flow:

Constraints:

  • OpenAI TTS: 4,096 character limit per request
  • Target: ~4,000 chars per chunk (safety margin)
  • Goal: Natural pauses at paragraph/sentence boundaries

Strategy:

def chunk_text(text: str, max_chars: int = 4000) -> List[str]:
    """
    Multi-level chunking for long-form content:

    1. Split by paragraphs (\n\n) - natural topic boundaries
    2. Accumulate paragraphs until approaching 4K limit
    3. If single paragraph > 4K, split by sentences
    4. If single sentence > 4K, split mid-sentence (rare edge case)

    Result: 10-15 chunks for 30-min audio, each ending at natural pause
    """
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks = []
    buf = []

    for p in paragraphs:
        candidate = "\n\n".join(buf + [p]) if buf else p
        if len(candidate) <= max_chars:
            buf.append(p)
        else:
            if not buf:
                # Paragraph too large - split by sentences
                sentences = re.split(r"(?<=[.!?])\s+", p)
                # Accumulate sentences with same logic...
            else:
                chunks.append("\n\n".join(buf))
                buf = [p]

    if buf:
        chunks.append("\n\n".join(buf))  # Flush the final buffer

    return chunks

Why This Matters for Long-Form:

  • Seamless Merging: Chunk boundaries at natural pauses prevent audio glitches
  • Even Distribution: Avoids tiny final chunks (better for progress tracking)
  • Memory Efficiency: Process one chunk at a time, not entire 4,500-word text
  • Resumability: Each chunk is independent; can resume mid-sequence

OpenAI TTS Streaming

Efficient audio generation using streaming responses:

async def node_tts_one_chunk(state: JobState) -> JobState:
    chunk_text = state["chunks"][state["chunk_index"]]
    segment_path = f"segment_{state['chunk_index']:04d}.mp3"

    client = OpenAI()

    # Stream directly to disk (memory efficient)
    with client.audio.speech.with_streaming_response.create(
        model="gpt-4o-mini-tts",
        voice=state["voice"],
        input=chunk_text,
        response_format="mp3",
    ) as response:
        response.stream_to_file(segment_path)

    # Upload to R2
    r2_url = upload_to_r2(segment_path, state["job_id"])

    return {
        **state,
        "segment_urls": [*state["segment_urls"], r2_url],
        "chunk_index": state["chunk_index"] + 1,
    }

Audio Merging Strategy

The system uses ffmpeg for high-quality concatenation:

async def node_generate_audio(state: TextState) -> TextState:
    # Generate all segments...

    # Create concat list for ffmpeg
    file_list_path.write_text(
        "\n".join(f"file '{segment}'" for segment in segment_paths)
    )

    # Merge using ffmpeg (codec copy - fast and lossless)
    subprocess.run([
        "ffmpeg", "-f", "concat", "-safe", "0",
        "-i", str(file_list_path),
        "-c", "copy",  # No re-encoding
        str(merged_path),
    ])

    # Fallback to binary concatenation if ffmpeg unavailable
    if not merged_path.exists():
        with open(merged_path, "wb") as merged:
            for segment in segment_paths:
                merged.write(segment.read_bytes())

Cloudflare R2 Integration

S3-compatible storage for globally distributed audio:

def get_r2_client():
    return boto3.client(
        's3',
        endpoint_url=f'https://{R2_ACCOUNT_ID}.r2.cloudflarestorage.com',
        aws_access_key_id=R2_ACCESS_KEY_ID,
        aws_secret_access_key=R2_SECRET_ACCESS_KEY,
        config=Config(signature_version='s3v4'),
    )

def upload_to_r2(file_path: Path, job_id: str) -> str:
    key = f"{job_id}/{file_path.name}"

    client.put_object(
        Bucket=R2_BUCKET_NAME,
        Key=key,
        Body=file_path.read_bytes(),
        ContentType='audio/mpeg',
    )

    return f"{R2_PUBLIC_DOMAIN}/{key}"

Structured Content Generation

Narrative Architecture Framework

The system implements a flexible content framework with customizable sections:

Key Components:

  1. Introduction (2-3 min): Hook the listener and set expectations
  2. Context: Background information and relevance
  3. Core Content: Main topic introduction with clear structure
  4. Examples: Concrete illustrations and case studies
  5. Deep Dive: Detailed exploration of key concepts
  6. Applications: Practical use cases and implementation
  7. Advanced Topics: Nuanced discussion for engaged learners
  8. Synthesis: Connect all concepts together
  9. Takeaways: Summary of key points
  10. Conclusion: Clear closing and next steps

Dynamic Content Adaptation

def build_content_prompt(state: TextState) -> str:
    minutes = state.get("target_duration_minutes") or 5
    target_words = int(minutes * 150)  # 150 words per minute narration

    content_type = state.get("content_type")

    # Select architecture based on content type
    architecture = generate_content_architecture(content_type)

    return f"""
Create a {state['language']} narrative for audio:

TOPIC: {state['title']}
TYPE: {content_type}
TARGET: {target_words} words ({minutes} minutes)

{architecture}

RESEARCH CONTEXT:
{format_research_summary(state['research_items'])}

Requirements:
- Plain text only (no markdown)
- Natural paragraph breaks
- Engaging, clear tone
- Appropriate language for audio listening
"""

API Endpoints

FastAPI Service Layer

Endpoint Implementations:

@app.post("/api/research/generate")
async def research_endpoint(req: ResearchRequest):
"""Generate research context using LangGraph + DeepSeek."""
return await generate_research(req)

@app.post("/api/text/generate")
async def text_endpoint(req: TextGenerationRequest):
"""Generate long-form text content (text-only mode)."""
return await generate_text(req)

@app.post("/api/audio/generate")
async def audio_endpoint(req: TextGenerationRequest):
"""Generate audio from existing content (audio-only mode)."""
return await generate_audio(req)

@app.post("/api/tts/generate")
async def tts_endpoint(req: TTSRequest, background_tasks: BackgroundTasks):
"""Generic TTS generation (fire-and-forget)."""
return await generate_tts(req, background_tasks)
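
The request models behind these endpoints aren't shown; a hedged sketch of what they might contain, inferred from the state fields used elsewhere in the article (field names are assumptions):

from pydantic import BaseModel

class TextGenerationRequest(BaseModel):
    content_id: int
    title: str
    content_type: str
    language: str = "en"
    target_duration_minutes: int | None = None
    voice: str = "alloy"
    generate_audio: bool = True

class TTSRequest(BaseModel):
    text: str
    voice: str = "alloy"
    job_id: str | None = None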

Deployment Architecture

Docker Containerization

FROM python:3.12-slim

WORKDIR /app

# Install ffmpeg for audio merging
RUN apt-get update && apt-get install -y ffmpeg && rm -rf /var/lib/apt/lists/*

# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Expose port
EXPOSE 8080

# Run FastAPI server
CMD ["uvicorn", "langgraph_server:app", "--host", "0.0.0.0", "--port", "8080"]

Environment Configuration

# AI Services
DEEPSEEK_API_KEY=sk-...
OPENAI_API_KEY=sk-...

# Database (Neon Postgres)
DATABASE_URL=postgresql://user:pass@host/db?sslmode=require

# Cloudflare R2
R2_ACCOUNT_ID=...
R2_ACCESS_KEY_ID=...
R2_SECRET_ACCESS_KEY=...
R2_BUCKET_NAME=longform-tts
R2_PUBLIC_DOMAIN=https://pub-longform-tts.r2.dev

Cloudflare Workers Deployment

# wrangler.toml
name = "langgraph-tts"
compatibility_date = "2024-01-01"

[build]
command = "docker build -t langgraph-tts ."

[[services]]
name = "langgraph-tts"
image = "langgraph-tts:latest"

[env]
PORT = "8080"

[[r2_buckets]]
binding = "LONGFORM_TTS"
bucket_name = "longform-tts"

Production Considerations

Performance Metrics for Long-Running Jobs

Benchmarks (30-minute audio generation):

| Stage | Duration | Checkpointed | Retryable |
| --- | --- | --- | --- |
| Text Generation (DeepSeek) | 30-60s | ✅ After completion | ✅ Full retry |
| Text Chunking | <1s | ✅ After completion | ✅ Instant |
| TTS Segments (1–12) | 10-20s each | ✅ After each segment | ✅ Per-segment |
| Audio Merging (ffmpeg) | 1–3s | ✅ After completion | ✅ Full retry |
| R2 Upload (merged) | 2-5s | ✅ After completion | ✅ Full retry |
| Total Pipeline | 3-5 minutes | 15+ checkpoints | Granular recovery |

Long-Running Job Profile:

# Example: 2-hour audiobook chapter
text_length   = 18,000 words
chunks        = 45              # ~4,000 chars each
tts_time      = 45 × 15s ≈ 11.25 minutes
text_gen_time = 2-3 minutes
total_time    ≈ 15 minutes for 2-hour audio

# Checkpoint frequency:
# - 1 after text generation
# - 45 (one after each TTS segment)
# - 1 after merge
# Total: 47 recovery points

Failure Recovery Times:

  • Crash at 80% complete → Resume in 1–2 seconds, continue from segment 36/45
  • Network timeout on segment 20 → Retry only segment 20, not segments 1–19
  • Database connection loss → Reconnect and load last checkpoint (<500ms)
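
Before resuming, it can also be useful to inspect what the last checkpoint contains. A minimal sketch, assuming the build_graph/AsyncPostgresSaver setup shown earlier and LangGraph's aget_state API:

async def inspect_job(thread_id: str):
    async with AsyncPostgresSaver.from_conn_string(os.getenv("DATABASE_URL")) as cp:
        app = build_graph(checkpointer=cp)
        config = {"configurable": {"thread_id": thread_id}}

        snapshot = await app.aget_state(config)  # latest checkpoint for this thread
        state = snapshot.values
        done = len(state.get("segment_urls", []))
        total = len(state.get("chunks", []))
        print(f"{thread_id}: {done}/{total} segments uploaded")
        return snapshot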

Error Handling & Resilience

async def node_generate_text(state: TextState) -> TextState:
    try:
        llm = ChatDeepSeek(model="deepseek-chat", temperature=0.7, max_tokens=2500)
        prompt = build_therapeutic_prompt(state)

        resp = await llm.ainvoke([HumanMessage(content=prompt)])
        text = clean_for_tts(resp.content)

        return {**state, "generated_text": text, "error": None}
    except Exception as e:
        print(f"❌ Text generation failed: {e}")
        return {**state, "error": str(e)}

Monitoring & Observability

Key metrics to track:

  1. Generation Metrics:

    • Text generation latency (DeepSeek)
    • TTS latency per chunk (OpenAI)
    • Total pipeline duration
  2. Quality Metrics:

    • Text length vs target duration
    • Chunk count and size distribution
    • Audio segment file sizes
  3. Infrastructure Metrics:

    • R2 upload success rate
    • Database checkpoint writes
    • ffmpeg merge success rate
  4. Cost Metrics:

    • DeepSeek token usage
    • OpenAI TTS character count
    • R2 storage and bandwidth
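
One lightweight way to start collecting the latency metrics above is to wrap node functions in a timing decorator and emit a structured log line per node; this is a sketch, not part of the original pipeline:

import functools
import time

def timed_node(fn):
    @functools.wraps(fn)
    async def wrapper(state):
        start = time.perf_counter()
        result = await fn(state)
        duration = time.perf_counter() - start
        print(f"[metrics] node={fn.__name__} duration_s={duration:.2f}")
        return result
    return wrapper

# Usage when building the graph:
# g.add_node("generate_text", timed_node(node_generate_text))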

Scaling Patterns

Horizontal Scaling:

  • FastAPI instances behind load balancer
  • Stateless design (state in Postgres)
  • R2 for distributed storage

Batch Processing:

async def batch_generate_audio(goal_ids: List[int]):
    """Process multiple goals in parallel."""
    tasks = [run_pipeline(build_state(id), f"batch-{id}") for id in goal_ids]
    results = await asyncio.gather(*tasks, return_exceptions=True)
    return results

Queue-Based Processing:

  • Use background tasks for long-running jobs
  • Celery/Redis for distributed task queue
  • Webhook callbacks for completion notifications
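
A hedged sketch of that fire-and-forget pattern with a completion webhook, reusing run_pipeline from earlier; the endpoint path, build_state helper, and callback_url field are assumptions, and httpx is an extra dependency:

import httpx
from fastapi import BackgroundTasks

async def run_and_notify(state: dict, thread_id: str, callback_url: str | None):
    final_state = await run_pipeline(state, thread_id)
    if callback_url:
        async with httpx.AsyncClient() as client:
            await client.post(callback_url, json={
                "thread_id": thread_id,
                "audio_url": final_state.get("audio_url"),
                "error": final_state.get("error"),
            })

@app.post("/api/tts/generate-async")
async def tts_async(req: TTSRequest, background_tasks: BackgroundTasks):
    thread_id = f"tts-{req.job_id or 'adhoc'}"
    background_tasks.add_task(
        run_and_notify, build_state(req), thread_id, getattr(req, "callback_url", None)
    )
    return {"status": "accepted", "thread_id": thread_id}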

Performance Optimization

Chunking Optimization

# Optimize chunk size for TTS quality vs API limits
OPTIMAL_CHUNK_SIZE = 3000  # Sweet spot for natural pauses

# Parallel TTS generation (with rate limiting)
async def parallel_tts_generation(chunks: List[str], max_concurrent: int = 3):
    semaphore = asyncio.Semaphore(max_concurrent)

    async def generate_with_limit(chunk, index):
        async with semaphore:
            return await generate_tts_segment(chunk, index)

    tasks = [generate_with_limit(c, i) for i, c in enumerate(chunks)]
    return await asyncio.gather(*tasks)

Caching Strategy

# Cache research results for similar goals
@lru_cache(maxsize=100)
def get_research_for_goal_type(therapeutic_type: str, age: int):
    """Cache research by type + age bracket."""
    return fetch_research(therapeutic_type, age)

# Cache text generation for re-use
async def get_or_generate_text(goal_id: int):
    existing = await db.fetch_story(goal_id)
    if existing and existing.created_at > datetime.now() - timedelta(days=7):
        return existing.text
    return await generate_new_text(goal_id)

Testing Strategy

Unit Tests

def test_chunk_text_respects_limit():
    long_text = "word " * 2000
    chunks = chunk_text(long_text, max_chars=4000)

    for chunk in chunks:
        assert len(chunk) <= 4000

def test_clean_for_tts_removes_markdown():
    text = "# Title\n\n**bold** and `code`"
    cleaned = clean_for_tts(text)
    assert "#" not in cleaned
    assert "**" not in cleaned
    assert "`" not in cleaned

Integration Tests

@pytest.mark.asyncio
async def test_full_pipeline():
    state = {
        "goal_id": 1,
        "goal_title": "Test anxiety reduction",
        "therapeutic_goal_type": "anxiety_reduction",
        "age": 8,
        # ... other fields
    }

    result = await run_pipeline(state, "test-thread-1")

    assert result["generated_text"] is not None
    assert len(result["chunks"]) > 0
    assert result["audio_url"] is not None
    assert result["error"] is None

Lessons Learned

1. State Design is Critical

  • Use TypedDict for type safety
  • Keep state flat (avoid deep nesting)
  • Include metadata for debugging (timestamps, IDs)

2. Checkpoint Strategically

  • Not all workflows need checkpointing
  • Audio-only mode: disable checkpoints to avoid schema issues
  • Use thread_id conventions: {workflow}-{entity_id}-{timestamp}

3. Error Recovery

  • Graceful degradation (segments work even if merge fails)
  • Fallback strategies (binary concat if ffmpeg unavailable)
  • Preserve partial results (individual segments in R2)

4. Cost Management

  • Monitor token usage (DeepSeek is cost-effective at roughly $0.14/$0.28 per 1M input/output tokens)
  • OpenAI TTS: $15 per 1M characters
  • R2 storage: $0.015/GB/month (much cheaper than S3)

5. Content Quality

  • Structured frameworks improve consistency
  • Repetition aids retention and comprehension
  • Audience-appropriate language is crucial for engagement

Future Enhancements

1. Multi-Voice Narratives

# Support character dialogue with different voices
voices = {
    "narrator": "cedar",
    "child_character": "nova",
    "parent_character": "marin",
}

2. Emotion-Adaptive TTS

# Adjust voice parameters based on content emotion
def get_tts_params(text: str) -> dict:
    sentiment = analyze_sentiment(text)

    if sentiment == "calm":
        return {"speed": 0.9, "pitch": 0}
    elif sentiment == "energetic":
        return {"speed": 1.1, "pitch": 2}
    return {"speed": 1.0, "pitch": 0}  # Neutral default

3. Real-Time Streaming

# Stream audio as it's generated (SSE)
async def stream_audio_generation(goal_id: int):
    async for chunk_url in generate_audio_stream(goal_id):
        yield f"data: {json.dumps({'chunk_url': chunk_url})}\n\n"

4. Multilingual Support

  • Expand beyond Romanian and English
  • Voice selection per language
  • Cultural adaptation of content frameworks

Conclusion

This LangGraph-based TTS architecture demonstrates several key patterns:

  1. Composable Workflows: Three distinct pipelines sharing common components
  2. Conditional Routing: Smart flow control based on state
  3. Durable Execution: PostgreSQL checkpointing for resilience
  4. Streaming Efficiency: Direct-to-disk TTS for memory optimization
  5. Distributed Storage: R2 for globally accessible audio

The system successfully processes 5-30+ minute long-form narratives (up to 7,000+ words), generating research-backed content, converting to high-quality audio, and delivering via CDN—all while maintaining resumability after failures and full observability.

Real-World Performance:

  • 30-minute generation: 12-15 TTS chunks, ~3-5 minutes total processing time
  • Failure recovery: Resume from any checkpoint in <1 second
  • Cost efficiency: $0.02–$0.07 per 30-minute audio (DeepSeek + OpenAI TTS)
  • Throughput: 10+ concurrent jobs on single instance

Key Takeaways for Long-Running Pipelines:

  • LangGraph + Postgres checkpointing is essential for long-form workflows
  • Streaming TTS to disk prevents memory exhaustion on long generations
  • Smart chunking (4K chars) balances API limits with narrative coherence
  • Immediate R2 uploads ensure partial results survive crashes
  • Async architecture enables fire-and-forget long operations
  • Thread-based recovery makes interrupted jobs trivial to resume

The architecture scales to long-form audio generation: audiobooks (10+ hours), comprehensive courses, documentary narration, or serialized storytelling—any use case where reliability and resumability are non-negotiable.


This architecture powers long-form audio generation, combining LangGraph orchestration, OpenAI TTS streaming, and distributed storage for production-ready AI audio systems.

DeFiTuna: On-Chain Limit Orders on Solana

· 15 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

TL;DR

This guide demonstrates real-world implementation of DeFiTuna limit orders on Solana mainnet, focusing on:

  • Direct RPC Interaction: Building and submitting transactions without high-level SDKs
  • On-Chain Limit Order Storage: How limit order parameters are encoded in account data
  • Position Lifecycle: Open position → Set limits → Close position with actual mainnet transactions
  • Account Derivation: PDA calculations for spot positions and associated token accounts
  • DeFiTuna SDK Patterns: Insights from the official TypeScript/Rust SDK implementation

Live Mainnet Transactions:

Introduction

DeFiTuna combines Orca Whirlpools with on-chain limit orders, enabling automated trading triggers without requiring active monitoring. Unlike traditional AMMs where you passively provide liquidity, DeFiTuna positions execute predefined trades when price conditions are met.

This article walks through the complete implementation—from deriving PDAs to encoding instruction data—using Rust with solana-sdk and insights from the DeFiTuna SDK.

Architecture Overview

DeFiTuna + Orca Integration

Key Insight: DeFiTuna is a Protocol Layer

DeFiTuna doesn't implement its own AMM. Instead, it:

  1. Wraps existing AMMs (Orca Whirlpools, Fusion pools)
  2. Adds limit order logic on top
  3. Manages leveraged positions and lending vaults

On-Chain Account Structure

Position Account Data Layout (346 bytes)

From our mainnet position 8wvKhHXHfzY4eQZTyK4kTfUtGj46XX5UX8P4S5kBbJ5:

Offset    Field                 Type     Bytes   Description
-------   -------------------   ------   -----   ------------------------
0-8       Discriminator         u64      8       Account type identifier
8-40      Authority             Pubkey   32      Position owner
40-72     Pool                  Pubkey   32      Orca Whirlpool address
72-73     Position Token        u8       1       0=TokenA, 1=TokenB
73-74     Collateral Token      u8       1       0=TokenA, 1=TokenB
92-100    Amount                u64      8       Position size
100-108   Borrowed              u64      8       Borrowed amount
184-200   Lower Limit √Price    u128     16      Buy trigger price
200-216   Upper Limit √Price    u128     16      Sell trigger price

Hex dump from mainnet:

00b8:   60 e4 c0 d6 1c 8e 68 01  00 00 00 00 00 00 00 00   Lower limit
00c8:   70 17 34 50 e5 c9 7c 01  00 00 00 00 00 00 00 00   Upper limit

These bytes encode:

  • Lower: 6651068162312125808640 → $130
  • Upper: 7024310870365581606912 → $145

Implementation: Rust Binaries

Project Structure

bots/defituna-bot/
├── Cargo.toml
├── .env                           # Configuration
├── defituna.json                  # Program IDL
└── src/
    ├── config.rs                  # Shared config
    └── bin/
        ├── open_spot_position.rs  # Create position
        ├── set_limit_orders.rs    # Configure triggers
        ├── check_position.rs      # Query state
        └── close_position.rs      # Cleanup

1. Opening a Spot Position

File: src/bin/open_spot_position.rs

use solana_sdk::{
instruction::{AccountMeta, Instruction},
signature::{Keypair, Signer},
transaction::Transaction,
};
use spl_associated_token_account::get_associated_token_address;

// DeFiTuna program ID (mainnet)
const DEFITUNA_PROGRAM: &str = "tuna4uSQZncNeeiAMKbstuxA9CUkHH6HmC64wgmnogD";

// Orca SOL/USDC Whirlpool
const WHIRLPOOL: &str = "Czfq3xZZDmsdGdUyrNLtRhGc47cXcZtLG4crryfu44zE";

fn main() -> Result<()> {
let program_id = Pubkey::from_str(DEFITUNA_PROGRAM)?;
let whirlpool = Pubkey::from_str(WHIRLPOOL)?;
let authority = executor_keypair.pubkey();

// Derive spot position PDA
let (tuna_spot_position, _bump) = Pubkey::find_program_address(
&[
b"tuna_spot_position",
authority.as_ref(),
whirlpool.as_ref(),
],
&program_id,
);

// Token program IDs
let token_program_a = spl_token::ID; // SOL uses standard program
let token_program_b = spl_token::ID; // USDC uses standard program

// Derive associated token accounts for position
let tuna_position_ata_a = get_associated_token_address(
&tuna_spot_position,
&SOL_MINT
);
let tuna_position_ata_b = get_associated_token_address(
&tuna_spot_position,
&USDC_MINT
);

// Build instruction
// Discriminator from IDL: [87, 208, 173, 48, 231, 62, 210, 220]
let mut data = vec![87, 208, 173, 48, 231, 62, 210, 220];

// Args: position_token (PoolToken::A=0), collateral_token (PoolToken::B=1)
data.push(0); // Trading SOL (position_token = A)
data.push(1); // Collateral is USDC (collateral_token = B)

let instruction = Instruction {
program_id,
accounts: vec![
AccountMeta::new(authority, true), // authority (signer, writable)
AccountMeta::new_readonly(SOL_MINT, false), // mint_a
AccountMeta::new_readonly(USDC_MINT, false), // mint_b
AccountMeta::new_readonly(token_program_a, false), // token_program_a
AccountMeta::new_readonly(token_program_b, false), // token_program_b
AccountMeta::new(tuna_spot_position, false), // tuna_position (writable)
AccountMeta::new(tuna_position_ata_a, false), // tuna_position_ata_a
AccountMeta::new(tuna_position_ata_b, false), // tuna_position_ata_b
AccountMeta::new_readonly(whirlpool, false), // pool (Orca Whirlpool)
AccountMeta::new_readonly(system_program::ID, false), // system_program
AccountMeta::new_readonly(
spl_associated_token_account::ID,
false
), // associated_token_program
],
data,
};

// Create and sign transaction
let recent_blockhash = rpc_client.get_latest_blockhash()?;
let transaction = Transaction::new_signed_with_payer(
&[instruction],
Some(&authority),
&[&executor_keypair],
recent_blockhash,
);

// Send to RPC
let signature = rpc_client.send_and_confirm_transaction(&transaction)?;
println!("Position created: {}", signature);

Ok(())
}

Key Points:

  • No collateral required to open position (just creates account structure)
  • Position PDA seeds: ["tuna_spot_position", authority, whirlpool]
  • Instruction creates position account + 2 associated token accounts
  • Cost: ~0.00329904 SOL rent + ~0.000005 SOL gas
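
One practical addition when scripting this (a sketch, not part of the original binary): check whether the position PDA already exists before paying for a transaction that would fail.

use solana_client::rpc_client::RpcClient;
use solana_sdk::pubkey::Pubkey;

/// Returns true if the position account already exists on-chain.
fn position_exists(rpc_client: &RpcClient, position_pda: &Pubkey) -> bool {
    rpc_client.get_account(position_pda).is_ok()
}

// Before building the open instruction:
// if position_exists(&rpc_client, &tuna_spot_position) {
//     println!("Position already open: {}", tuna_spot_position);
//     return Ok(());
// }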

2. Setting Limit Orders

File: src/bin/set_limit_orders.rs

/// Convert price to sqrt_price format
/// Formula: sqrt_price = sqrt(price) * 2^64, adjusted for decimals
fn price_to_sqrt_price(price: f64, decimals_a: u8, decimals_b: u8) -> u128 {
let decimal_diff = decimals_a as i32 - decimals_b as i32;
let adjusted_price = if decimal_diff >= 0 {
price * 10_f64.powi(decimal_diff)
} else {
price / 10_f64.powi(-decimal_diff)
};

let sqrt_price_f64 = adjusted_price.sqrt() * (1u128 << 64) as f64;
sqrt_price_f64 as u128
}

fn main() -> Result<()> {
// Derive same position PDA
let (tuna_spot_position, _) = Pubkey::find_program_address(
&[b"tuna_spot_position", authority.as_ref(), whirlpool.as_ref()],
&program_id,
);

// Set limit prices
let lower_price = 130.0; // Buy if SOL drops to $130
let upper_price = 145.0; // Sell if SOL rises to $145

// Convert to sqrt_price (SOL=9 decimals, USDC=6 decimals)
let lower_sqrt_price = price_to_sqrt_price(lower_price, 9, 6);
let upper_sqrt_price = price_to_sqrt_price(upper_price, 9, 6);

// Build instruction
// Discriminator: [10, 180, 19, 205, 169, 133, 52, 118]
let mut data = vec![10, 180, 19, 205, 169, 133, 52, 118];

// Args: lower_limit_order_sqrt_price (u128), upper_limit_order_sqrt_price (u128)
data.extend_from_slice(&lower_sqrt_price.to_le_bytes());
data.extend_from_slice(&upper_sqrt_price.to_le_bytes());

let instruction = Instruction {
program_id,
accounts: vec![
AccountMeta::new_readonly(authority, true), // authority (signer)
AccountMeta::new(tuna_spot_position, false), // tuna_position (writable)
],
data,
};

let transaction = Transaction::new_signed_with_payer(
&[instruction],
Some(&authority),
&[&executor_keypair],
recent_blockhash,
);

let signature = rpc_client.send_and_confirm_transaction(&transaction)?;
println!("Limit orders set: {}", signature);

Ok(())
}

Output (mainnet):

Lower (buy): $130 → sqrt_price: 6651068162312125808640
Upper (sell): $145 → sqrt_price: 7024310870365581606912

Verification on-chain:

solana account 8wvKhHXHfzY4eQZTyK4kTfUtGj46XX5UX8P4S5kBbJ5

# Bytes 184-200 (lower limit):
60 e4 c0 d6 1c 8e 68 01 00 00 00 00 00 00 00 00

# Bytes 200-216 (upper limit):
70 17 34 50 e5 c9 7c 01 00 00 00 00 00 00 00 00

3. Closing a Position

File: src/bin/close_position.rs

fn main() -> Result<()> {
// Derive position PDA (same as open)
let (tuna_spot_position, _) = Pubkey::find_program_address(
&[b"tuna_spot_position", authority.as_ref(), whirlpool.as_ref()],
&program_id,
);

// Build close instruction
// Discriminator: [4, 189, 171, 84, 110, 220, 10, 8]
let data = vec![4, 189, 171, 84, 110, 220, 10, 8];

let instruction = Instruction {
program_id,
accounts: vec![
AccountMeta::new(authority, true), // authority
AccountMeta::new_readonly(SOL_MINT, false), // mint_a
AccountMeta::new_readonly(USDC_MINT, false), // mint_b
AccountMeta::new_readonly(spl_token::ID, false), // token_program_a
AccountMeta::new_readonly(spl_token::ID, false), // token_program_b
AccountMeta::new(tuna_spot_position, false), // tuna_position
AccountMeta::new(tuna_position_ata_a, false), // tuna_position_ata_a
AccountMeta::new(tuna_position_ata_b, false), // tuna_position_ata_b
],
data,
};

let signature = rpc_client.send_and_confirm_transaction(&transaction)?;
println!("Position closed, rent recovered: {}", signature);

Ok(())
}

Result: Rent (0.00329904 SOL) returned to wallet, position account deleted.

DeFiTuna SDK Patterns

TypeScript SDK Structure

From DefiTuna/tuna-sdk repository:

// ts-sdk/client/src/txbuilder/openTunaSpotPosition.ts
export async function openTunaSpotPositionInstructions(
rpc: Rpc<GetAccountInfoApi & GetMultipleAccountsApi>,
authority: TransactionSigner,
poolAddress: Address,
args: OpenTunaSpotPositionInstructionDataArgs,
): Promise<IInstruction[]> {
// 1. Derive position PDA
const tunaPositionAddress = (
await getTunaSpotPositionAddress(authority.address, poolAddress)
)[0];

// 2. Get associated token accounts
const tunaPositionAtaA = (
await findAssociatedTokenPda({
owner: tunaPositionAddress,
mint: mintA.address,
tokenProgram: mintA.programAddress,
})
)[0];

const tunaPositionAtaB = (
await findAssociatedTokenPda({
owner: tunaPositionAddress,
mint: mintB.address,
tokenProgram: mintB.programAddress,
})
)[0];

// 3. Build instruction
return getOpenTunaSpotPositionInstruction({
authority,
mintA: mintA.address,
mintB: mintB.address,
tokenProgramA: mintA.programAddress,
tokenProgramB: mintB.programAddress,
tunaPosition: tunaPositionAddress,
tunaPositionAtaA,
tunaPositionAtaB,
pool: poolAddress,
associatedTokenProgram: ASSOCIATED_TOKEN_PROGRAM_ADDRESS,
...args,
});
}

Rust SDK Patterns

From rust-sdk/client/src/txbuilder/open_tuna_spot_position.rs:

pub fn open_tuna_spot_position_instructions(
rpc: &RpcClient,
authority: &Pubkey,
pool_address: &Pubkey,
args: OpenTunaSpotPositionInstructionArgs,
) -> Result<Vec<Instruction>> {
// 1. Fetch pool account to determine token mints
let pool_account = rpc.get_account(pool_address)?;

// Decode based on program owner
let (mint_a_address, mint_b_address) = if pool_account.owner == FUSIONAMM_ID {
let pool: FusionPool = decode_account(&pool_account)?;
(pool.token_mint_a, pool.token_mint_b)
} else if pool_account.owner == WHIRLPOOL_ID {
let pool: Whirlpool = decode_account(&pool_account)?;
(pool.token_mint_a, pool.token_mint_b)
} else {
return Err(anyhow!("Unsupported pool type"));
};

// 2. Get mint accounts to determine token programs
let mint_accounts = rpc.get_multiple_accounts(&[
mint_a_address.into(),
mint_b_address.into()
])?;

let mint_a_account = mint_accounts[0].as_ref()
.ok_or(anyhow!("Token A mint not found"))?;
let mint_b_account = mint_accounts[1].as_ref()
.ok_or(anyhow!("Token B mint not found"))?;

// 3. Build instruction
Ok(vec![open_tuna_spot_position_instruction(
authority,
pool_address,
&mint_a_address,
&mint_b_address,
&mint_a_account.owner, // Token program
&mint_b_account.owner,
args,
)])
}

Key SDK Features:

  1. Pool Type Detection: Automatically handles Orca vs Fusion pools
  2. Token Program Discovery: Supports both SPL Token and Token-2022
  3. Account Pre-Validation: Checks accounts exist before building transaction
  4. Error Handling: Detailed error messages for debugging

Solana RPC Interaction Patterns

1. Account Fetching with Commitment

use solana_client::rpc_client::RpcClient;
use solana_sdk::commitment_config::CommitmentConfig;

let rpc_client = RpcClient::new_with_commitment(
    "https://api.mainnet-beta.solana.com",
    CommitmentConfig::confirmed(),
);

// Fetch account with specific commitment
let account = rpc_client.get_account_with_commitment(
    &position_address,
    CommitmentConfig::finalized(),
)?;

if let Some(account) = account.value {
    println!("Account exists: {} bytes", account.data.len());
}

2. Transaction Simulation Before Sending

// Build transaction
let transaction = Transaction::new_signed_with_payer(
&[instruction],
Some(&payer.pubkey()),
&[&payer],
recent_blockhash,
);

// Simulate first
match rpc_client.simulate_transaction(&transaction) {
Ok(result) => {
if let Some(err) = result.value.err {
println!("Simulation failed: {:?}", err);
if let Some(logs) = result.value.logs {
for log in logs {
println!(" {}", log);
}
}
return Err(anyhow!("Simulation error"));
}
println!("✅ Simulation successful");
}
Err(e) => {
println!("RPC error during simulation: {}", e);
return Err(e.into());
}
}

// Now send for real
let signature = rpc_client.send_and_confirm_transaction(&transaction)?;

3. Parsing Account Data

fn parse_position_limits(account_data: &[u8]) -> Result<(u128, u128)> {
    if account_data.len() < 216 {
        return Err(anyhow!("Account data too short"));
    }

    // Extract limit order sqrt prices
    let lower_bytes: [u8; 16] = account_data[184..200].try_into().unwrap();
    let upper_bytes: [u8; 16] = account_data[200..216].try_into().unwrap();

    let lower_sqrt_price = u128::from_le_bytes(lower_bytes);
    let upper_sqrt_price = u128::from_le_bytes(upper_bytes);

    Ok((lower_sqrt_price, upper_sqrt_price))
}

// Usage
let account = rpc_client.get_account(&position_pda)?;
let (lower, upper) = parse_position_limits(&account.data)?;

// Convert to human-readable prices
let lower_price = sqrt_price_to_price(lower, 9, 6);
let upper_price = sqrt_price_to_price(upper, 9, 6);

println!("Buy limit: ${:.2}", lower_price);
println!("Sell limit: ${:.2}", upper_price);

4. Watching for Transaction Confirmation

use solana_sdk::signature::Signature;
use std::time::Duration;

fn wait_for_confirmation(
rpc_client: &RpcClient,
signature: &Signature,
max_retries: u32,
) -> Result<()> {
for i in 0..max_retries {
std::thread::sleep(Duration::from_secs(2));

if let Ok(status) = rpc_client.get_signature_status(signature) {
if let Some(result) = status {
if let Err(e) = result {
return Err(anyhow!("Transaction failed: {:?}", e));
}
println!("✅ Transaction confirmed in slot");
return Ok(());
}
}

if i == max_retries - 1 {
return Err(anyhow!("Transaction timeout"));
}
}

Ok(())
}

Advanced: Modify Position with Collateral

The modify_tuna_spot_position_orca instruction adds collateral and activates trading:

// From IDL
pub struct ModifyTunaSpotPositionOrca {
pub decrease_percent: u32, // 0 for increase
pub collateral_amount: u64, // USDC to deposit
pub borrow_amount: u64, // SOL to borrow from vault
pub required_swap_amount: u64, // 0 for auto-calc
pub remaining_accounts_info: RemainingAccountsInfo,
}

// Required accounts (24 total)
accounts: vec![
AccountMeta::new(authority, true),
AccountMeta::new_readonly(tuna_config, false),
AccountMeta::new_readonly(mint_a, false),
AccountMeta::new_readonly(mint_b, false),
AccountMeta::new_readonly(token_program_a, false),
AccountMeta::new_readonly(token_program_b, false),
AccountMeta::new(market, false),
AccountMeta::new(vault_a, false),
AccountMeta::new(vault_b, false),
AccountMeta::new(vault_a_ata, false),
AccountMeta::new(vault_b_ata, false),
AccountMeta::new(tuna_spot_position, false),
AccountMeta::new(tuna_position_ata_a, false),
AccountMeta::new(tuna_position_ata_b, false),
AccountMeta::new(authority_ata_a, false),
AccountMeta::new(authority_ata_b, false),
AccountMeta::new(fee_recipient_ata_a, false),
AccountMeta::new(fee_recipient_ata_b, false),
AccountMeta::new_readonly(pyth_oracle_price_feed_a, false),
AccountMeta::new_readonly(pyth_oracle_price_feed_b, false),
AccountMeta::new_readonly(whirlpool_program, false),
AccountMeta::new(whirlpool, false),
AccountMeta::new_readonly(memo_program, false),
AccountMeta::new_readonly(system_program::ID, false),
]

Why so many accounts?

  • DeFiTuna needs to interact with lending vaults
  • Price oracles (Pyth) for health checks
  • Fee collection accounts
  • Orca Whirlpool state for actual swaps
  • Multiple token accounts for each token in the pair

Price Calculation Mathematics

Square Root Price Encoding

DeFiTuna stores prices as $\sqrt{P} \cdot 2^{64}$ (the same encoding as Uniswap V3). With the decimal adjustment applied before the square root, matching the price_to_sqrt_price code above:

$$\text{sqrt\_price} = \sqrt{P \cdot 10^{(\text{decimals}_a - \text{decimals}_b)}} \cdot 2^{64}$$

Example: SOL/USDC at $130

  • SOL decimals: 9
  • USDC decimals: 6
  • Adjustment: $10^{(9-6)} = 1000$

$$\text{sqrt\_price} = \sqrt{130 \cdot 1000} \cdot 2^{64} = \sqrt{130000} \cdot 18446744073709551616 \approx 360.555 \cdot 18446744073709551616 \approx 6651068162312125808640$$

Converting Back to Price

fn sqrt_price_to_price(sqrt_price: u128, decimals_a: u8, decimals_b: u8) -> f64 {
    let decimal_diff = decimals_a as i32 - decimals_b as i32;
    let price_raw = (sqrt_price as f64 / (1u128 << 64) as f64).powi(2);

    if decimal_diff >= 0 {
        price_raw / 10_f64.powi(decimal_diff)
    } else {
        price_raw * 10_f64.powi(-decimal_diff)
    }
}

// Verify on-chain data (allow for floating-point rounding)
let lower_sqrt_price = 6651068162312125808640u128;
let price = sqrt_price_to_price(lower_sqrt_price, 9, 6);
assert!((price - 130.0).abs() < 0.01);

Testing on Mainnet vs Devnet

Devnet Limitations

# DeFiTuna pools don't exist on devnet
$ solana account 9m96e4CieVMjTC7vP1a1pM3qfn5A5kHRPs3SrsVZBGqt --url devnet
Error: AccountNotFound

# Orca Whirlpools also limited on devnet

Recommendation: Test on mainnet with minimal amounts (0.01-0.1 SOL).

Mainnet Testing Strategy

  1. Fund wallet: 0.1 SOL (~$13.65 at current prices)
  2. Gas budget: ~0.000005 SOL per transaction
  3. Rent: ~0.00329904 SOL (recoverable on close)
  4. Test sequence:
    • Open position: ~$0.0007 gas
    • Set limits: ~$0.0007 gas
    • Close position: ~$0.0007 gas + recover rent

Total cost: ~$0.002 for full testing cycle

Production Considerations

1. Error Handling

use thiserror::Error;

#[derive(Debug, Error)]
pub enum DefiTunaError {
    #[error("Position does not exist")]
    PositionNotFound,

    #[error("Invalid tick range")]
    InvalidTickRange,

    #[error("Insufficient collateral")]
    InsufficientCollateral,

    #[error("RPC error: {0}")]
    RpcError(#[from] solana_client::client_error::ClientError),
}

// Usage
match rpc_client.get_account(&position_pda) {
    Ok(_) => { /* process */ },
    Err(_) => return Err(DefiTunaError::PositionNotFound),
}

2. Rate Limiting

use std::time::{Duration, Instant};

struct RateLimiter {
    last_request: Instant,
    min_interval: Duration,
}

impl RateLimiter {
    pub fn new(requests_per_second: u32) -> Self {
        Self {
            last_request: Instant::now(),
            min_interval: Duration::from_millis(1000 / requests_per_second as u64),
        }
    }

    pub fn wait(&mut self) {
        let elapsed = self.last_request.elapsed();
        if elapsed < self.min_interval {
            std::thread::sleep(self.min_interval - elapsed);
        }
        self.last_request = Instant::now();
    }
}

// Usage with public RPC (limit to 10 req/s)
let mut limiter = RateLimiter::new(10);

for position in positions {
    limiter.wait();
    let account = rpc_client.get_account(&position)?;
    // process...
}

3. Transaction Retry Logic

const MAX_RETRIES: u32 = 3;

fn send_with_retry(
    rpc_client: &RpcClient,
    transaction: &Transaction,
) -> Result<Signature> {
    let mut last_error = None;

    for attempt in 1..=MAX_RETRIES {
        match rpc_client.send_and_confirm_transaction(transaction) {
            Ok(signature) => return Ok(signature),
            Err(e) => {
                println!("Attempt {}/{} failed: {}", attempt, MAX_RETRIES, e);
                last_error = Some(e);

                if attempt < MAX_RETRIES {
                    std::thread::sleep(Duration::from_secs(2_u64.pow(attempt)));
                }
            }
        }
    }

    Err(last_error.unwrap().into())
}

Complete Working Example

Here's a full program that opens a position, sets limits, and closes:

use anyhow::Result;
use solana_client::rpc_client::RpcClient;
use solana_sdk::{
instruction::{AccountMeta, Instruction},
pubkey::Pubkey,
signature::{Keypair, Signer},
transaction::Transaction,
};
use std::str::FromStr;

const DEFITUNA_PROGRAM: &str = "tuna4uSQZncNeeiAMKbstuxA9CUkHH6HmC64wgmnogD";
const WHIRLPOOL: &str = "Czfq3xZZDmsdGdUyrNLtRhGc47cXcZtLG4crryfu44zE";
const SOL_MINT: &str = "So11111111111111111111111111111111111111112";
const USDC_MINT: &str = "EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v";

fn main() -> Result<()> {
// Initialize
let rpc_url = "https://api.mainnet-beta.solana.com";
let rpc_client = RpcClient::new(rpc_url);

let executor_keypair = read_keypair_from_env()?;
let program_id = Pubkey::from_str(DEFITUNA_PROGRAM)?;
let whirlpool = Pubkey::from_str(WHIRLPOOL)?;

// Derive position PDA
let (position_pda, _) = Pubkey::find_program_address(
&[
b"tuna_spot_position",
executor_keypair.pubkey().as_ref(),
whirlpool.as_ref(),
],
&program_id,
);

println!("Position PDA: {}", position_pda);

// Step 1: Open position
let open_ix = build_open_position_instruction(
&program_id,
&executor_keypair.pubkey(),
&whirlpool,
&position_pda,
)?;

let sig1 = send_instruction(&rpc_client, &executor_keypair, open_ix)?;
println!("✅ Position opened: {}", sig1);

// Step 2: Set limit orders
let limits_ix = build_set_limits_instruction(
&program_id,
&executor_keypair.pubkey(),
&position_pda,
130.0, // buy at $130
145.0, // sell at $145
)?;

let sig2 = send_instruction(&rpc_client, &executor_keypair, limits_ix)?;
println!("✅ Limits set: {}", sig2);

// Step 3: Verify on-chain
std::thread::sleep(std::time::Duration::from_secs(5));
let account = rpc_client.get_account(&position_pda)?;
let (lower, upper) = parse_limits(&account.data)?;
println!("📊 On-chain limits: ${:.2} / ${:.2}", lower, upper);

// Step 4: Close position
let close_ix = build_close_position_instruction(
&program_id,
&executor_keypair.pubkey(),
&position_pda,
)?;

let sig3 = send_instruction(&rpc_client, &executor_keypair, close_ix)?;
println!("✅ Position closed: {}", sig3);

Ok(())
}

fn send_instruction(
rpc_client: &RpcClient,
payer: &Keypair,
instruction: Instruction,
) -> Result<String> {
let recent_blockhash = rpc_client.get_latest_blockhash()?;
let transaction = Transaction::new_signed_with_payer(
&[instruction],
Some(&payer.pubkey()),
&[payer],
recent_blockhash,
);

let signature = rpc_client.send_and_confirm_transaction(&transaction)?;
Ok(signature.to_string())
}

// Helper functions omitted for brevity
// See full implementation in GitHub repository

Deployment Checklist

Before deploying to production:

  • Test on mainnet with small amounts first
  • Implement comprehensive error handling
  • Add transaction retry logic
  • Set up monitoring and alerts
  • Use paid RPC endpoint (Helius, Triton, QuickNode)
  • Implement rate limiting
  • Add logging for all RPC calls
  • Store transaction signatures for audit
  • Test limit order execution in both directions
  • Verify position health calculations
  • Plan for emergency position closure

Conclusion

Building on DeFiTuna requires understanding:

  1. PDA Derivation: Positions are deterministic based on authority + pool
  2. Instruction Encoding: Discriminators + args in little-endian format
  3. RPC Patterns: Simulation, confirmation, retry logic
  4. On-Chain Data: Reading and parsing account bytes
  5. SDK Integration: When to use SDK vs raw transactions

The mainnet transactions in this guide prove that limit orders are truly stored on-chain and executable without active monitoring. Once set, the DeFiTuna protocol monitors prices and executes trades automatically.

Key Insight: DeFiTuna is a protocol abstraction layer, not a standalone AMM. It wraps existing liquidity sources (Orca, Fusion) with advanced order types, making it a powerful tool for automated trading strategies on Solana.

Resources

Deploying Real-Time Solana Data Streams on Cloudflare Containers with LaserStream

· 13 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

LaserStream deployed on Cloudflare Containers

TL;DR: Deploy a production-ready real-time Solana slot streaming service using Helius LaserStream SDK on Cloudflare Containers.

  • Ultra-low latency: Real-time slot updates via gRPC
  • Global edge deployment: Cloudflare's global network
  • Auto-scaling: Container lifecycle managed by Durable Objects
  • Production-ready: Health checks, error handling, and observability

Why LaserStream on Cloudflare?

Helius LaserStream provides ultra-low latency access to Solana data via gRPC streaming. Traditional WebSocket polling introduces delays; LaserStream eliminates this with direct gRPC connections to Helius nodes.

Why Cloudflare Containers?

Traditional deployments require server provisioning, load balancing, and scaling. Cloudflare Containers solve this:

| Feature | Traditional VPS | Cloudflare Containers |
| --- | --- | --- |
| Global deployment | Manual multi-region setup | Automatic edge deployment |
| Scaling | Manual or autoscaling groups | Auto-scaling via Durable Objects |
| Cold start | Always running (cost) | Sleep after inactivity |
| gRPC support | Yes | Yes (in Containers, not Workers) |
| SSL/TLS | Manual cert management | Automatic |
| DDoS protection | Additional service | Built-in |

Architecture overview

Key components:

  1. Cloudflare Worker (TypeScript/Hono): HTTP API layer, routing, health checks
  2. Durable Object: Singleton manager for container lifecycle
  3. Rust Container (Axum): gRPC client for LaserStream, HTTP server for API
  4. Helius LaserStream: Real-time Solana data via gRPC

Project structure

laserstream-container/
├── src/
│   └── index.ts          # Worker (Hono API + Durable Object routing)
├── container_src/
│   ├── Cargo.toml        # Rust dependencies
│   └── src/
│       ├── main.rs       # Axum HTTP server
│       └── stream.rs     # LaserStream gRPC client
├── Dockerfile            # Multi-stage Rust build
├── wrangler.jsonc        # Cloudflare configuration
├── package.json          # Build and deployment scripts
└── tsconfig.json         # TypeScript configuration

Prerequisites

Before deploying, ensure you have:

  • Cloudflare account with Workers enabled
  • Helius API key (get one here - free tier available)
  • Docker Desktop (for building container images)
  • Node.js 20+ and pnpm
  • Wrangler CLI (npm install -g wrangler)

Authenticate with Cloudflare

wrangler login

This opens a browser to authorize Wrangler with your Cloudflare account.


Step 1: Worker implementation

The Worker provides the HTTP API layer and routes requests to the container.

Install dependencies

pnpm install @cloudflare/containers hono
pnpm install -D typescript wrangler

Worker code (src/index.ts)

import { Container } from "@cloudflare/containers";
import { Hono } from "hono";

// Define the container class
export class LaserStreamContainer extends Container<Env> {
defaultPort = 8080;
sleepAfter = "2m"; // Sleep after 2 minutes of inactivity

envVars = {
HELIUS_API_KEY: "", // Set via wrangler secret
LASERSTREAM_ENDPOINT: "https://laserstream-devnet-ewr.helius-rpc.com",
RUST_LOG: "info",
};

override onStart() {
console.log("LaserStream container started");
}

override onStop() {
console.log("LaserStream container stopped");
}

override onError(error: unknown) {
console.error("LaserStream container error:", error);
}
}

// Create Hono app
const app = new Hono<{ Bindings: Env }>();

// Service information endpoint
app.get("/", (c) => {
return c.text(
"LaserStream on Cloudflare Containers\n\n" +
"Endpoints:\n" +
"GET /health - Health check\n" +
"POST /start - Start LaserStream subscription\n" +
"GET /latest - Get latest slot update\n"
);
});

// Worker health check
app.get("/health", (c) => {
return c.json({
status: "ok",
timestamp: new Date().toISOString()
});
});

// Proxy all other requests to the singleton container
app.all("*", async (c) => {
try {
// Get singleton container by name
const containerId = c.env.LASERSTREAM_CONTAINER.idFromName("laserstream-main");
const container = c.env.LASERSTREAM_CONTAINER.get(containerId);

// Forward request to container
return await container.fetch(c.req.raw);
} catch (error) {
console.error("Container error:", error);
return c.json(
{ error: "Container unavailable", details: String(error) },
500
);
}
});

export default app;

Key concepts

  • Durable Object singleton: idFromName("laserstream-main") ensures only one container instance handles all requests
  • Sleep after inactivity: Container sleeps after 2 minutes, saving costs
  • Error handling: Graceful fallback if container is unavailable
  • Environment variables: Container receives config via envVars

Step 2: Rust container implementation

The Rust container runs the LaserStream gRPC client and exposes an HTTP API.

Container dependencies (container_src/Cargo.toml)

[package]
name = "laserstream_container"
version = "0.1.0"
edition = "2021"

[dependencies]
anyhow = "1.0"
axum = "0.7"
chrono = { version = "0.4", features = ["serde"] }
futures-util = "0.3"
helius-laserstream = "0.1.5"
serde = { version = "1.0", features = ["derive"] }
tokio = { version = "1.49", features = ["full"] }
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }

HTTP server (container_src/src/main.rs)

use std::{
net::SocketAddr,
sync::{
atomic::{AtomicBool, Ordering},
Arc,
},
};

use axum::{
extract::State,
http::StatusCode,
response::IntoResponse,
routing::{get, post},
Json, Router,
};
use serde::Serialize;
use tokio::sync::RwLock;
use tracing::{error, info};
use tracing_subscriber::EnvFilter;

mod stream;

#[derive(Clone)]
struct AppState {
started: Arc<AtomicBool>,
latest: Arc<RwLock<Option<LatestSlot>>>,
}

#[derive(Debug, Clone, Serialize)]
struct LatestSlot {
slot: u64,
parent: Option<u64>,
status: String,
created_at_rfc3339: Option<String>,
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
// Initialize logging
tracing_subscriber::fmt()
.with_env_filter(EnvFilter::from_default_env())
.init();

let port: u16 = std::env::var("PORT")
.unwrap_or_else(|_| "8080".to_string())
.parse()?;

let state = AppState {
started: Arc::new(AtomicBool::new(false)),
latest: Arc::new(RwLock::new(None)),
};

// Start stream on boot
ensure_stream_started(state.clone()).await;

// Define routes
let app = Router::new()
.route("/health", get(health))
.route("/start", post(start))
.route("/latest", get(latest))
.with_state(state);

let addr = SocketAddr::from(([0, 0, 0, 0], port));
info!("listening on {}", addr);

let listener = tokio::net::TcpListener::bind(addr).await?;
axum::serve(listener, app).await?;

Ok(())
}

async fn health() -> impl IntoResponse {
(StatusCode::OK, "ok\n")
}

async fn start(State(state): State<AppState>) -> impl IntoResponse {
ensure_stream_started(state).await;
(StatusCode::OK, "started\n")
}

async fn latest(State(state): State<AppState>) -> impl IntoResponse {
let guard = state.latest.read().await;
match guard.as_ref() {
Some(slot) => (StatusCode::OK, Json(slot.clone())),
None => (
StatusCode::NOT_FOUND,
Json(LatestSlot {
slot: 0,
parent: None,
status: "no data".to_string(),
created_at_rfc3339: None,
}),
),
}
}

async fn ensure_stream_started(state: AppState) {
if !state.started.swap(true, Ordering::SeqCst) {
tokio::spawn(async move {
if let Err(e) = stream::run_slot_stream(state.clone()).await {
error!("Stream error: {}", e);
state.started.store(false, Ordering::SeqCst);
}
});
}
}

LaserStream client (container_src/src/stream.rs)

use std::collections::HashMap;

use anyhow::{anyhow, Context};
use chrono::{DateTime, Utc};
use futures_util::StreamExt;
use tokio::pin;
use tracing::{info, warn};

use crate::{AppState, LatestSlot};
use helius_laserstream::{
    config::LaserstreamConfig,
    grpc::{subscribe_update::UpdateOneof, SubscribeRequest},
    client::subscribe,
};

pub async fn run_slot_stream(state: AppState) -> anyhow::Result<()> {
let endpoint = std::env::var("LASERSTREAM_ENDPOINT")
.context("LASERSTREAM_ENDPOINT is required")?;
let api_key = std::env::var("HELIUS_API_KEY")
.context("HELIUS_API_KEY is required")?;

info!("connecting to LaserStream at {}", endpoint);

let config = LaserstreamConfig {
endpoint,
x_token: Some(api_key),
};

let request = SubscribeRequest {
slots: HashMap::new(),
accounts: HashMap::new(),
transactions: HashMap::new(),
blocks: HashMap::new(),
blocks_meta: HashMap::new(),
entry: HashMap::new(),
commitment: None,
accounts_data_slice: vec![],
ping: None,
};

let mut stream = subscribe(config, request).await?;
pin!(stream);

info!("LaserStream connected, waiting for updates...");

while let Some(msg) = stream.next().await {
match msg {
Ok(update) => {
if let Some(UpdateOneof::Slot(slot_update)) = update.update_oneof {
let created_at = slot_update
.created_at
.and_then(|ts| {
DateTime::from_timestamp(ts.seconds, ts.nanos as u32)
})
.map(|dt: DateTime<Utc>| dt.to_rfc3339());

let latest = LatestSlot {
slot: slot_update.slot,
parent: Some(slot_update.parent),
status: format!("{:?}", slot_update.status),
created_at_rfc3339: created_at,
};

*state.latest.write().await = Some(latest.clone());
info!("slot update: {:?}", latest);
}
}
Err(e) => {
warn!("stream error: {}", e);
return Err(anyhow!("stream error: {}", e));
}
}
}

Ok(())
}

Step 3: Dockerfile

Build the Rust container with a multi-stage Dockerfile for minimal image size.

# syntax=docker/dockerfile:1

FROM rust:1.83-slim AS build

WORKDIR /app

# Install build dependencies
RUN apt-get update && apt-get install -y \
pkg-config \
libssl-dev \
protobuf-compiler \
build-essential \
g++ \
&& rm -rf /var/lib/apt/lists/*

# Copy Rust source
COPY container_src/Cargo.toml ./
COPY container_src/src ./src

# Build release binary
RUN cargo build --release

# Runtime image
FROM debian:bookworm-slim
RUN apt-get update && \
apt-get install -y ca-certificates libssl3 && \
rm -rf /var/lib/apt/lists/*

COPY --from=build /app/target/release/laserstream_container /laserstream_container
EXPOSE 8080

CMD ["/laserstream_container"]

Build optimizations

  • Multi-stage build: Build stage uses full Rust toolchain, runtime uses minimal Debian
  • Dependency caching: Cargo dependencies cached in Docker layers
  • Release build: Optimized binary with --release
  • Minimal runtime: Only ca-certificates and libssl3 in final image
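
Note that the Dockerfile above recompiles every crate whenever the source changes. A common way to get the dependency-layer caching mentioned above is to pre-build against a stub main.rs; this is a sketch, not part of the original Dockerfile:

# Cache dependencies: build once with a stub main.rs
COPY container_src/Cargo.toml ./
RUN mkdir src && echo "fn main() {}" > src/main.rs && cargo build --release && rm -rf src

# Now copy the real sources; only application code rebuilds on change
COPY container_src/src ./src
RUN touch src/main.rs && cargo build --release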

Step 4: Wrangler configuration

Configure the Worker and Container deployment.

wrangler.jsonc

{
"$schema": "node_modules/wrangler/config-schema.json",
"name": "laserstream-container",
"main": "src/index.ts",
"compatibility_date": "2025-01-08",
"compatibility_flags": ["nodejs_compat"],

"observability": {
"enabled": true
},

"containers": [
{
"class_name": "LaserStreamContainer",
"image": "registry.cloudflare.com/<ACCOUNT_ID>/laserstream-container-rust:v1.0.0",
"max_instances": 10
}
],

"durable_objects": {
"bindings": [
{
"class_name": "LaserStreamContainer",
"name": "LASERSTREAM_CONTAINER"
}
]
},

"migrations": [
{
"new_sqlite_classes": ["LaserStreamContainer"],
"tag": "v1"
}
],

"vars": {
"LASERSTREAM_ENDPOINT": "https://laserstream-devnet-ewr.helius-rpc.com"
}
}

Replace <ACCOUNT_ID> with your Cloudflare account ID (find it in the Cloudflare dashboard).

Configuration explained

  • compatibility_date: API version for Workers runtime
  • containers: Container image registry path and scaling settings
  • durable_objects: Singleton container manager binding
  • migrations: Durable Object migrations; new_sqlite_classes registers the SQLite-backed container class
  • vars: Environment variables (non-sensitive)

Step 5: Build and deploy

Build scripts (package.json)

{
"name": "laserstream-container",
"version": "1.0.0",
"scripts": {
"build": "tsc && cargo build --release --manifest-path=container_src/Cargo.toml",
"build:container": "wrangler containers build . --tag laserstream-container-rust:latest",
"push:container": "wrangler containers push laserstream-container-rust:latest",
"deploy": "wrangler deploy",
"dev": "wrangler dev",
"tail": "wrangler tail",
"secret:set": "wrangler secret put"
},
"dependencies": {
"@cloudflare/containers": "^0.0.21",
"hono": "4.11.1"
},
"devDependencies": {
"@types/node": "^25.0.3",
"typescript": "5.9.3",
"wrangler": "4.58.0"
}
}

Build the container image

# Build Rust container locally
pnpm run build:container

# Tag with version
docker tag laserstream-container-rust:latest laserstream-container-rust:v1.0.0

# Push to Cloudflare registry
pnpm run push:container

Expected output:

Building container image...
Successfully tagged laserstream-container-rust:latest
Pushing to registry.cloudflare.com/...
Image pushed successfully
Digest: sha256:59c03a69b057...

Set secrets

Before deploying, set the Helius API key:

echo "YOUR_HELIUS_API_KEY" | pnpm run secret:set HELIUS_API_KEY

Alternatively, use interactive mode:

pnpm run secret:set HELIUS_API_KEY
# Paste your API key when prompted

Deploy to Cloudflare

pnpm run deploy

Expected output:

Uploading Worker...
Published laserstream-container (0.42 sec)
https://laserstream-container.<your-subdomain>.workers.dev

Step 6: Testing the deployment

Health check

curl https://laserstream-container.<your-subdomain>.workers.dev/health

Expected response:

{
"status": "ok",
"timestamp": "2025-01-09T12:34:56.789Z"
}

Start LaserStream

curl -X POST https://laserstream-container.<your-subdomain>.workers.dev/start

Expected response:

started

Get latest slot update

curl https://laserstream-container.<your-subdomain>.workers.dev/latest

Expected response:

{
"slot": 285432167,
"parent": 285432166,
"status": "Confirmed",
"created_at_rfc3339": "2025-01-09T12:35:01.234Z"
}

Monitoring and debugging

View live logs

pnpm run tail

Expected output:

2025-01-09T12:34:56.789Z INFO laserstream_container: listening on 0.0.0.0:8080
2025-01-09T12:35:01.234Z INFO laserstream_container: connecting to LaserStream at https://laserstream-devnet-ewr.helius-rpc.com
2025-01-09T12:35:02.456Z INFO laserstream_container: LaserStream connected, waiting for updates...
2025-01-09T12:35:03.678Z INFO laserstream_container: slot update: LatestSlot { slot: 285432167, ... }

Common issues

"Missing or invalid API key"

Cause: HELIUS_API_KEY secret not set or incorrect.

Fix:

# Verify secret is set
wrangler secret list

# Re-set if missing
echo "YOUR_KEY" | pnpm run secret:set HELIUS_API_KEY

# Redeploy
pnpm run deploy

Container not starting

Cause: Docker image not pushed or incorrect registry path.

Fix:

# Verify image exists
docker images | grep laserstream

# Rebuild and push
pnpm run build:container
pnpm run push:container
pnpm run deploy

"Container unavailable" errors

Cause: Container sleeping or crashed.

Fix:

# Check logs
pnpm run tail

# Restart container
curl -X POST https://<your-url>/start

Production considerations

Scaling and costs

  • Cold starts: First request after sleep takes ~2-5 seconds to spin up container
  • Warm instances: Subsequent requests are instant while container is active
  • Sleep after: Configure sleepAfter based on request frequency
  • Max instances: Set max_instances based on expected load

Cost optimization

export class LaserStreamContainer extends Container<Env> {
sleepAfter = "5m"; // Sleep after 5 minutes for dev
// sleepAfter = "30m"; // Sleep after 30 minutes for production
}

Error handling

Add retry logic and circuit breakers:

// In stream.rs
pub async fn run_slot_stream(state: AppState) -> anyhow::Result<()> {
let mut retry_count = 0;
const MAX_RETRIES: u32 = 5;

loop {
// try_connect (not shown) would wrap the subscribe/stream loop from run_slot_stream
match try_connect(&state).await {
Ok(_) => {
retry_count = 0; // Reset on success
}
Err(e) => {
retry_count += 1;
if retry_count >= MAX_RETRIES {
return Err(anyhow!("Max retries exceeded: {}", e));
}
let backoff = std::time::Duration::from_secs(2_u64.pow(retry_count));
warn!("Retry {} after {:?}: {}", retry_count, backoff, e);
tokio::time::sleep(backoff).await;
}
}
}
}

Multi-region deployment

For global low-latency access, use Cloudflare's automatic edge deployment:

{
"placement": { "mode": "smart" }
}

Requests already enter Cloudflare at the nearest edge; Smart Placement additionally lets Cloudflare choose where to run the Worker, typically closer to the services it calls most, reducing overall latency.

Security

  1. API key rotation: Regularly rotate HELIUS_API_KEY
  2. Rate limiting: Add rate limiting in Worker
  3. Authentication: Add bearer tokens for production
// In Worker
app.use("*", async (c, next) => {
const token = c.req.header("Authorization");
if (!token || token !== `Bearer ${c.env.API_SECRET}`) {
return c.json({ error: "Unauthorized" }, 401);
}
await next();
});

Integration examples

Polling from a trading bot

// jupiter-laserstream-bot/src/poller.ts
import { setInterval } from "timers/promises";

const CONTAINER_URL = "https://laserstream-container.<subdomain>.workers.dev";

async function pollLatestSlot() {
const response = await fetch(`${CONTAINER_URL}/latest`);
const data = await response.json();

console.log(`Latest slot: ${data.slot}`);

// Trigger trading logic
await handleSlotUpdate(data);
}

// Poll every 2 seconds
for await (const _ of setInterval(2000)) {
await pollLatestSlot();
}

WebSocket broadcasting

Convert HTTP polling to WebSocket for browser clients:

// websocket-bridge.ts
import { WebSocketServer } from "ws";

const wss = new WebSocketServer({ port: 8080 });
const CONTAINER_URL = "https://laserstream-container.<subdomain>.workers.dev";

wss.on("connection", (ws) => {
const interval = setInterval(async () => {
const response = await fetch(`${CONTAINER_URL}/latest`);
const data = await response.json();
ws.send(JSON.stringify(data));
}, 1000);

ws.on("close", () => clearInterval(interval));
});

Comparison with alternatives

| Approach | Latency | Cost | Complexity | Scalability |
| --- | --- | --- | --- | --- |
| WebSocket polling | ~500ms | Low | Low | Manual |
| Traditional VPS | ~100ms | Medium | High | Manual |
| LaserStream + Cloudflare | ~50ms | Low (pay-per-use) | Medium | Automatic |
| Direct gRPC | ~30ms | Medium | High | Manual |

When to use this approach

Use Cloudflare Containers when:

  • You need global low-latency access
  • You want automatic scaling
  • You prefer pay-per-use pricing
  • You need DDoS protection

Use traditional VPS when:

  • You need full control over infrastructure
  • You have consistent high traffic (24/7)
  • You need specialized networking configurations

Conclusion

Deploying LaserStream on Cloudflare Containers provides a production-ready solution for real-time Solana data streaming with:

  • Global edge deployment: Automatic routing to nearest edge location
  • Auto-scaling: Container lifecycle managed by Durable Objects
  • Cost efficiency: Pay only for active container time
  • Developer experience: Simple deployment with Wrangler CLI

The combination of Helius LaserStream's ultra-low latency gRPC streaming and Cloudflare's global network creates a powerful platform for building real-time Solana applications.

Next steps

  • Add caching: Cache slot updates in Durable Object storage
  • Add metrics: Integrate with Cloudflare Analytics
  • Add filtering: Filter specific accounts or programs
  • Add historical replay: Use LaserStream's historical slot replay (up to 3000 slots)

Resources

DefiTuna AMM Smart Contract — Anchor Implementation Guide

· 12 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

TL;DR

DefiTuna AMM combines concentrated liquidity, on-chain limit orders, and leveraged liquidity provision in a unique AMM design. This article focuses on the Anchor smart contract that implements this protocol, specifically examining:

  • Program Architecture: A hybrid AMM with integrated limit order book, implemented as a multi-instruction Anchor program
  • Key Instructions: initialize_pool, create_position, place_limit_order, swap, leverage_position
  • Account Structure: Complex PDA hierarchies for pools, positions, orders, and leveraged positions with proper rent management
  • Mathematical Core: Concentrated liquidity calculations with leverage multipliers
  • Security Patterns: Comprehensive validation, owner controls, and reentrancy protection through Solana's native constraints

Introduction

DefiTuna AMM represents an innovative approach to decentralized exchanges by integrating traditional AMM mechanics with orderbook-style limit orders and leverage capabilities. This Anchor smart contract implements the core on-chain logic that makes this possible, providing developers with a practical example of advanced Solana DeFi programming patterns.

The program follows a modular architecture with separate handlers for pool management, position creation, order placement, and swap execution. All state is managed through PDAs to ensure secure ownership and access control.

Architecture Diagrams

Account Relationships

Instruction Flow for Limit Order Execution

Account Structure

| Account Type | PDA Seeds | Purpose |
| --- | --- | --- |
| PoolState | ["pool", base_mint, quote_mint] | Stores pool configuration and global state |
| PositionState | ["position", pool, owner, position_id] | Tracks user's concentrated liquidity position |
| LimitOrderState | ["order", pool, owner, order_id] | Stores limit order parameters and status |
| LeveragedPositionState | ["leverage", pool, owner, leverage_id] | Manages leveraged liquidity positions |
| ProtocolConfigState | ["config"] | Global protocol parameters and fee accounts |
| TickState | ["tick", pool, tick_index] | Stores liquidity data for specific price ticks |

PDA Derivation Example

#[account]
#[derive(InitSpace)]
pub struct Pool {
pub base_mint: Pubkey,
pub quote_mint: Pubkey,
pub fee_rate: u16, // basis points
pub protocol_fee_rate: u16,
pub tick_spacing: i32,
pub current_tick_index: i32,
pub liquidity: u128,
pub sqrt_price: u128,
pub fee_growth_global_0: u128,
pub fee_growth_global_1: u128,
pub protocol_fees_0: u64,
pub protocol_fees_1: u64,
pub bump: u8,
}

// PDA derivation for pool
let (pool_pda, pool_bump) = Pubkey::find_program_address(
&[
b"pool",
base_mint.as_ref(),
quote_mint.as_ref(),
],
program_id
);

## Instruction Handlers Deep Dive

### 1. `initialize_pool` - Pool Creation

This instruction creates a new liquidity pool with specified parameters:

```rust
#[derive(Accounts)]
pub struct InitializePool<'info> {
#[account(mut)]
pub payer: Signer<'info>,

#[account(
init,
payer = payer,
space = 8 + Pool::INIT_SPACE,
seeds = [
b"pool",
base_mint.key().as_ref(),
quote_mint.key().as_ref()
],
bump
)]
pub pool: Account<'info, Pool>,

pub base_mint: Account<'info, Mint>,
pub quote_mint: Account<'info, Mint>,

pub system_program: Program<'info, System>,
}

pub fn initialize_pool(
ctx: Context<InitializePool>,
fee_rate: u16,
tick_spacing: i32,
initial_sqrt_price: u128,
) -> Result<()> {
let pool = &mut ctx.accounts.pool;

// Validate parameters
require!(fee_rate <= MAX_FEE_RATE, ErrorCode::InvalidFeeRate);
require!(tick_spacing > 0, ErrorCode::InvalidTickSpacing);

// Initialize pool state
pool.base_mint = ctx.accounts.base_mint.key();
pool.quote_mint = ctx.accounts.quote_mint.key();
pool.fee_rate = fee_rate;
pool.tick_spacing = tick_spacing;
pool.sqrt_price = initial_sqrt_price;
pool.current_tick_index = calculate_tick_from_sqrt_price(initial_sqrt_price);
pool.bump = ctx.bumps.pool;

Ok(())
}

### 2. `create_position` - Concentrated Liquidity Position

Creates a position with liquidity concentrated between specified ticks:

```rust
#[derive(Accounts)]
#[instruction(position_id: u64)]
pub struct CreatePosition<'info> {
#[account(mut)]
pub owner: Signer<'info>,

#[account(
seeds = [
b"pool",
pool.base_mint.as_ref(),
pool.quote_mint.as_ref()
],
bump = pool.bump
)]
pub pool: Account<'info, Pool>,

#[account(
init,
payer = owner,
space = 8 + Position::INIT_SPACE,
seeds = [
b"position",
pool.key().as_ref(),
owner.key().as_ref(),
&position_id.to_le_bytes()
],
bump
)]
pub position: Account<'info, Position>,

#[account(mut)]
pub token_account_a: Account<'info, TokenAccount>,
#[account(mut)]
pub token_account_b: Account<'info, TokenAccount>,

pub token_program: Program<'info, Token>,
pub system_program: Program<'info, System>,
}

#[account]
#[derive(InitSpace)]
pub struct Position {
pub pool: Pubkey,
pub owner: Pubkey,
pub liquidity: u128,
pub tick_lower: i32,
pub tick_upper: i32,
pub fee_growth_inside_0_last: u128,
pub fee_growth_inside_1_last: u128,
pub tokens_owed_0: u64,
pub tokens_owed_1: u64,
pub bump: u8,
}

pub fn create_position(
ctx: Context<CreatePosition>,
position_id: u64,
tick_lower: i32,
tick_upper: i32,
liquidity_delta: u128,
) -> Result<()> {
let pool = &mut ctx.accounts.pool;
let position = &mut ctx.accounts.position;

// Validate tick bounds
require!(tick_lower < tick_upper, ErrorCode::InvalidTickRange);
require!(tick_lower % pool.tick_spacing == 0, ErrorCode::TickNotSpaced);
require!(tick_upper % pool.tick_spacing == 0, ErrorCode::TickNotSpaced);

// Calculate required token amounts
let (amount_a, amount_b) = calculate_liquidity_amounts(
pool.sqrt_price,
tick_lower,
tick_upper,
liquidity_delta
);

// Transfer tokens from user
transfer_tokens_in(
&ctx.accounts.token_account_a,
&ctx.accounts.token_account_b,
amount_a,
amount_b,
&ctx.accounts.token_program,
&ctx.accounts.owner
)?;

// Update position state
position.pool = ctx.accounts.pool.key();
position.owner = ctx.accounts.owner.key();
position.liquidity = liquidity_delta;
position.tick_lower = tick_lower;
position.tick_upper = tick_upper;
position.bump = ctx.bumps.position;

// Update pool liquidity
update_ticks_liquidity(pool, tick_lower, tick_upper, liquidity_delta, true)?;

Ok(())
}

### 3. `place_limit_order` - Orderbook Integration

Creates a limit order that sits on the order book until matched:

```rust
#[derive(Accounts)]
#[instruction(order_id: u64)]
pub struct PlaceLimitOrder<'info> {
#[account(mut)]
pub owner: Signer<'info>,

#[account(
seeds = [
b"pool",
pool.base_mint.as_ref(),
pool.quote_mint.as_ref()
],
bump = pool.bump
)]
pub pool: Account<'info, Pool>,

#[account(
init,
payer = owner,
space = 8 + LimitOrder::INIT_SPACE,
seeds = [
b"order",
pool.key().as_ref(),
owner.key().as_ref(),
&order_id.to_le_bytes()
],
bump
)]
pub order: Account<'info, LimitOrder>,

#[account(
mut,
token::mint = pool.base_mint,
token::authority = owner
)]
pub user_token_account: Account<'info, TokenAccount>,

#[account(
init,
payer = owner,
token::mint = pool.base_mint,
token::authority = order,
seeds = [
b"order_vault",
order.key().as_ref()
],
bump
)]
pub order_vault: Account<'info, TokenAccount>,

pub token_program: Program<'info, Token>,
pub system_program: Program<'info, System>,
}

#[account]
#[derive(InitSpace)]
pub struct LimitOrder {
pub pool: Pubkey,
pub owner: Pubkey,
pub order_id: u64,
pub tick: i32,
pub amount: u64,
pub is_bid: bool,
pub filled_amount: u64,
pub status: OrderStatus,
pub bump: u8,
}

pub fn place_limit_order(
ctx: Context<PlaceLimitOrder>,
order_id: u64,
tick: i32,
amount: u64,
is_bid: bool,
) -> Result<()> {
let order = &mut ctx.accounts.order;

// Validate tick alignment
let pool = &ctx.accounts.pool;
require!(tick % pool.tick_spacing == 0, ErrorCode::TickNotSpaced);

// Transfer tokens to order vault
let transfer_ctx = CpiContext::new(
ctx.accounts.token_program.to_account_info(),
Transfer {
from: ctx.accounts.user_token_account.to_account_info(),
to: ctx.accounts.order_vault.to_account_info(),
authority: ctx.accounts.owner.to_account_info(),
}
);

transfer(transfer_ctx, amount)?;

// Initialize order
order.pool = ctx.accounts.pool.key();
order.owner = ctx.accounts.owner.key();
order.order_id = order_id;
order.tick = tick;
order.amount = amount;
order.is_bid = is_bid;
order.filled_amount = 0;
order.status = OrderStatus::Open;
order.bump = ctx.bumps.order;

// Emit order placed event
emit!(OrderPlaced {
pool: ctx.accounts.pool.key(),
owner: ctx.accounts.owner.key(),
order_id,
tick,
amount,
is_bid,
timestamp: Clock::get()?.unix_timestamp,
});

Ok(())
}

### 4. `swap` - Execution with Order Matching

Executes a swap, potentially matching against limit orders:

```rust
pub fn swap(
ctx: Context<Swap>,
amount: u64,
sqrt_price_limit: u128,
is_exact_input: bool,
) -> Result<()> {
let pool = &mut ctx.accounts.pool;

// Calculate swap amounts
let (amount_in, amount_out, sqrt_price_new, liquidity) =
compute_swap_step(
pool.sqrt_price,
sqrt_price_limit,
pool.liquidity,
amount,
pool.fee_rate,
is_exact_input
)?;

// Check against limit orders in this tick range
let matched_orders = find_matching_orders(
pool,
pool.current_tick_index,
get_tick_from_sqrt_price(sqrt_price_new),
!is_exact_input // opposite side of trade
);

let mut total_matched = 0;
for order_info in matched_orders {
let order_account = &ctx.remaining_accounts[order_info.index];
let mut order = Account::<LimitOrder>::try_from(order_account)?;

let match_amount = min(order.amount - order.filled_amount, amount_out - total_matched);

// Execute against limit order
execute_against_limit_order(
&mut order,
match_amount,
&ctx.accounts.token_program,
&ctx.accounts.user_token_account,
&ctx.accounts.order_vaults[order_info.vault_index]
)?;

total_matched += match_amount;
if total_matched >= amount_out {
break;
}
}

// Update pool state
pool.sqrt_price = sqrt_price_new;
pool.liquidity = liquidity;

// Apply fees
let protocol_fee = amount_in
.checked_mul(pool.protocol_fee_rate as u64)
.unwrap()
.checked_div(10_000)
.unwrap();

if is_exact_input {
pool.protocol_fees_0 = pool.protocol_fees_0
.checked_add(protocol_fee)
.unwrap();
} else {
pool.protocol_fees_1 = pool.protocol_fees_1
.checked_add(protocol_fee)
.unwrap();
}

Ok(())
}

### 5. `leverage_position` - Leveraged Liquidity

Enables leveraged liquidity provision with up to 5x multiplier:

```rust
pub fn leverage_position(
ctx: Context<LeveragePosition>,
leverage_id: u64,
position_key: Pubkey,
leverage_multiplier: u8,
collateral_amount: u64,
) -> Result<()> {
require!(leverage_multiplier >= 1, ErrorCode::InvalidLeverage);
require!(leverage_multiplier <= MAX_LEVERAGE, ErrorCode::ExceedsMaxLeverage);

let position = &ctx.accounts.position;
let leveraged_position = &mut ctx.accounts.leveraged_position;

// Calculate borrowed amounts
let total_liquidity_value = calculate_position_value(position, ctx.accounts.pool.sqrt_price);
let collateral_value = calculate_token_value(collateral_amount, ctx.accounts.pool.sqrt_price);

let max_borrow_value = collateral_value
.checked_mul(leverage_multiplier as u64)
.unwrap()
.checked_sub(collateral_value)
.unwrap();

// Create leveraged position
leveraged_position.position = position_key;
leveraged_position.owner = ctx.accounts.owner.key();
leveraged_position.leverage_multiplier = leverage_multiplier;
leveraged_position.collateral_amount = collateral_amount;
leveraged_position.borrowed_amount_0 = calculate_borrow_amount_0(total_liquidity_value, max_borrow_value);
leveraged_position.borrowed_amount_1 = calculate_borrow_amount_1(total_liquidity_value, max_borrow_value);
leveraged_position.bump = ctx.bumps.leveraged_position;

// Health check: ensure position remains safe
let health_ratio = calculate_health_ratio(leveraged_position, ctx.accounts.pool.sqrt_price);
require!(health_ratio > MIN_HEALTH_RATIO, ErrorCode::InsufficientCollateral);

Ok(())
}

## Mathematical Formulas

### Concentrated Liquidity Calculations

The amount of token X and Y required for a liquidity position between ticks $t_L$ and $t_U$ is given by:

$$
\Delta x = \Delta L \cdot \left( \frac{1}{\sqrt{P}} - \frac{1}{\sqrt{P_U}} \right)
$$

$$
\Delta y = \Delta L \cdot \left( \sqrt{P} - \sqrt{P_L} \right)
$$

Where:
- $\Delta L$ is the liquidity delta
- $\sqrt{P}$ is the current square root price
- $\sqrt{P_L}, \sqrt{P_U}$ are square root prices at lower and upper ticks
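
For intuition, here is a small TypeScript sketch of these two formulas (floating point only, for illustration; the on-chain program works in u128 fixed-point, and the helper names are hypothetical):

```typescript
// Illustrative only: floating-point sketch of the Δx / Δy formulas above.
// Assumes the current price lies inside [P_L, P_U]; on-chain math uses u128 fixed-point.
function tickToSqrtPrice(tick: number): number {
  return Math.sqrt(Math.pow(1.0001, tick)); // tick → sqrt(price), Uniswap-style spacing
}

function liquidityAmounts(
  sqrtPrice: number,      // current sqrt(P)
  tickLower: number,
  tickUpper: number,
  liquidityDelta: number  // ΔL
): { amountX: number; amountY: number } {
  const sqrtLower = tickToSqrtPrice(tickLower); // sqrt(P_L)
  const sqrtUpper = tickToSqrtPrice(tickUpper); // sqrt(P_U)
  const amountX = liquidityDelta * (1 / sqrtPrice - 1 / sqrtUpper); // Δx
  const amountY = liquidityDelta * (sqrtPrice - sqrtLower);         // Δy
  return { amountX, amountY };
}

// Example: ΔL = 1,000,000 between ticks -6000 and 6000, current price at tick 0
console.log(liquidityAmounts(tickToSqrtPrice(0), -6000, 6000, 1_000_000));
```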

### Swap Computation

For a swap with fee $f$ (in basis points):

Effective amount in after fees:

$$
\Delta x_{eff} = \Delta x \cdot \left(1 - \frac{f}{10^4}\right)
$$

Output amount:

$$
\Delta y = \frac{y \cdot \Delta x_{eff}}{x + \Delta x_{eff}}
$$

### Leverage Health Ratio

$$
\text{Health Ratio} = \frac{\text{Position Value}}{\text{Borrowed Value} \cdot \text{Liquidation Threshold}}
$$

Positions are liquidated when:

$$
\text{Health Ratio} < 1
$$

## Solana & Anchor Best Practices

### 1. Account Validation Patterns

Always validate accounts using Anchor's type system:

```rust
#[account(
constraint = token_account.mint == pool.base_mint,
constraint = token_account.owner == owner.key()
)]
pub token_account: Account<'info, TokenAccount>,

### 2. Compute Unit Optimization

Use iteration limits and batch processing for order matching:

```rust
const MAX_ORDERS_PER_SWAP: usize = 10;

for i in 0..min(remaining_orders.len(), MAX_ORDERS_PER_SWAP) {
// Process order
if compute_units_remaining() < SAFE_COMPUTE_LIMIT {
break;
}
}

### 3. Token-2022 Compatibility

Handle transfer fees by checking received amounts:

```rust
let balance_before = token_account.amount;
transfer(transfer_ctx, amount)?;
token_account.reload()?;
let balance_after = token_account.amount;
let received_amount = balance_after.checked_sub(balance_before).unwrap();

### 4. Event Emission for Indexers

Emit structured events for easy off-chain processing:

```rust
#[event]
pub struct SwapEvent {
pub pool: Pubkey,
pub trader: Pubkey,
pub amount_in: u64,
pub amount_out: u64,
pub sqrt_price_before: u128,
pub sqrt_price_after: u128,
pub liquidity: u128,
pub timestamp: i64,
}

## Security Considerations

### 1. Access Control

All critical operations use PDA-based authority:

```rust
#[account(
seeds = [b"config"],
bump = config.bump,
constraint = config.admin == admin.key()
)]
pub config: Account<'info, ProtocolConfig>,

### 2. Input Validation

Validate all user inputs with appropriate bounds:

```rust
require!(tick_lower < tick_upper, ErrorCode::InvalidTickRange);
require!(fee_rate <= MAX_FEE_RATE, ErrorCode::InvalidFeeRate);
require!(amount > 0, ErrorCode::ZeroAmount);

### 3. Arithmetic Safety

Use checked arithmetic to prevent overflows:

```rust
let total = amount_a
.checked_add(amount_b)
.ok_or(ErrorCode::ArithmeticOverflow)?;

### 4. Reentrancy Protection

Solana's transaction model prevents reentrancy, but validate cross-program interactions:

```rust
// Ensure token accounts belong to the expected mints
require!(
token_account_a.mint == pool.base_mint &&
token_account_b.mint == pool.quote_mint,
ErrorCode::InvalidTokenAccount
);

### 5. Oracle Manipulation Protection

Use time-weighted prices for sensitive operations:

```rust
let price = calculate_time_weighted_price(
pool.sqrt_price_history,
Clock::get()?.unix_timestamp
);

## How to Use This Contract

### Building and Deploying

```bash
# Build the program
anchor build

# Deploy to devnet
anchor deploy --provider.cluster devnet

# Verify deployment
solana program show --programs

### Example TypeScript Client

```typescript
import * as anchor from "@coral-xyz/anchor";
import { Program } from "@coral-xyz/anchor";
import { DefiTunaAmm } from "../target/types/defi_tuna_amm";

async function createPosition() {
const provider = anchor.AnchorProvider.env();
anchor.setProvider(provider);

const program = anchor.workspace.DefiTunaAmm as Program<DefiTunaAmm>;

const [poolPda] = anchor.web3.PublicKey.findProgramAddressSync(
[
Buffer.from("pool"),
baseMint.toBuffer(),
quoteMint.toBuffer()
],
program.programId
);

const positionId = new anchor.BN(Date.now());
const [positionPda] = anchor.web3.PublicKey.findProgramAddressSync(
[
Buffer.from("position"),
poolPda.toBuffer(),
provider.wallet.publicKey.toBuffer(),
positionId.toArrayLike(Buffer, "le", 8)
],
program.programId
);

const tx = await program.methods
.createPosition(
positionId,
-6000, // tick_lower
6000, // tick_upper
new anchor.BN(1000000) // liquidity
)
.accounts({
pool: poolPda,
position: positionPda,
owner: provider.wallet.publicKey,
tokenAccountA: tokenAccountA,
tokenAccountB: tokenAccountB,
})
.rpc();

console.log("Transaction signature:", tx);
}

### Required Pre-Instructions

For complex operations like leveraged positions, you may need to:

1. Create associated token accounts
2. Approve token transfers
3. Initialize required PDAs
4. Fund accounts with minimum rent
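
As a rough client-side sketch of the first step, the @solana/spl-token helpers can build the associated token account instruction to prepend to the transaction (payer, owner, and mint here are placeholders supplied by the caller):

```typescript
import { PublicKey } from '@solana/web3.js';
import {
  getAssociatedTokenAddressSync,
  createAssociatedTokenAccountInstruction,
} from '@solana/spl-token';

// Sketch of pre-instruction #1: derive the owner's ATA for a mint and build
// the create instruction to add before create_position / leverage_position.
export function createAtaIx(payer: PublicKey, owner: PublicKey, mint: PublicKey) {
  const ata = getAssociatedTokenAddressSync(mint, owner);
  const ix = createAssociatedTokenAccountInstruction(payer, ata, owner, mint);
  return { ata, ix };
}

// Usage: const { ata, ix } = createAtaIx(wallet.publicKey, wallet.publicKey, baseMint);
// then prepend `ix` to the transaction that calls the program instruction.
```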

## Extending the Contract

### Adding New Instructions

1. Define new account structs in `#[derive(Accounts)]`
2. Implement handler function with proper validation
3. Add to the `lib.rs` module exports
4. Update IDL generation

### Customization Points

- **Fee Models**: Modify `compute_swap_step` for dynamic fees
- **Order Types**: Extend `LimitOrder` for different order types (FOK, IOC)
- **Leverage Models**: Add new collateral types or liquidation mechanisms
- **Oracle Integration**: Incorporate Pyth or Switchboard for price feeds

### Testing Strategies

```rust
#[tokio::test]
async fn test_swap_with_limit_order_match() {
let mut test = ProgramTest::new(
"defi_tuna_amm",
id(),
processor!(processor::Processor::process)
);

// Add accounts and mints
test.add_account(mint_pubkey, mint_account);

let (mut banks_client, payer, recent_blockhash) = test.start().await;

// Create and place limit order
let place_order_ix = Instruction {
program_id: id(),
accounts: place_order_accounts,
data: place_order_data,
};

// Execute swap that should match
let swap_ix = Instruction {
program_id: id(),
accounts: swap_accounts,
data: swap_data,
};

let transaction = Transaction::new_signed_with_payer(
&[place_order_ix, swap_ix],
Some(&payer.pubkey()),
&[&payer],
recent_blockhash
);

banks_client.process_transaction(transaction).await.unwrap();
}

## Conclusion

The DefiTuna AMM smart contract demonstrates advanced Anchor patterns for building sophisticated DeFi protocols on Solana. By combining concentrated liquidity, limit orders, and leverage in a single program, it showcases how to manage complex state relationships while maintaining security and efficiency.

Key takeaways for developers:
1. Use PDA hierarchies for secure ownership and access control
2. Implement mathematical operations with overflow protection
3. Design for composability with other Solana programs
4. Emit comprehensive events for off-chain indexing
5. Optimize for compute units in iteration-heavy operations

This contract serves as a foundation for building next-generation AMMs that bridge the gap between traditional order books and automated market makers.

Building a Production-Ready Jupiter Swap Integration on Solana with Anchor

· 27 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Jupiter swap integration architecture on Solana

TL;DR: Production swap execution engine with guardrails, operational controls, and analytics-ready telemetry.

  • Jupiter routes across 20+ venues for best execution
  • On-chain policy layer: fee collection, slippage caps, admin pause, auditability
  • Attributable, debuggable, measurable via structured event telemetry
  • Built for product teams: reliable execution, operational visibility, supportability

Why integrate Jupiter programmatically?

Jupiter aggregates liquidity from 20+ venues (CPAMM, CLMM, DLMM, PMM, order books) providing best execution, reduced slippage, and MEV protection through automatic routing and order splitting.

Why wrap Jupiter in your own program?

Direct Jupiter API usage is simple, but wrapping it in an Anchor program enables:

| Feature | Direct API | Anchor program wrapper |
| --- | --- | --- |
| Fee collection | manual logic | on-chain enforcement |
| Platform branding | client-side only | program-owned config |
| Access control | none | admin-gated pause/update |
| Composability | limited | CPI-friendly for other protocols |
| Audit trail | off-chain | on-chain events |
| Slippage protection | client-side | program-enforced |

Policy layer: what is enforced where

Understanding enforcement boundaries is critical for security and UX:

| Concern | Enforced on-chain | Enforced in client | Notes |
| --- | --- | --- | --- |
| Slippage ceiling | Yes | Yes | On-chain cap prevents hostile clients from bypassing limits |
| Fee collection | Yes | No | Must be deterministic; client cannot skip or reduce fees |
| Quote freshness | No | Yes | Client refreshes quotes; include quote timestamp in intent |
| Route allowlist/denylist | Optional | Optional | Useful for risk control (e.g., block suspicious pools) |
| Pause / emergency stop | Yes | No | Admin can halt swaps immediately for incidents |
| Compute budget | No | Yes | Client requests higher compute units for complex routes |
| Intent deduplication | No | Off-chain | Backend checks intent_id before indexing |

Key principle: On-chain enforces non-bypassable invariants (fees, caps, pauses). Client enforces UX optimizations (quote refresh, compute). Off-chain systems handle analytics and deduplication.
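
As a small illustration of that split, the client can pre-clamp the requested slippage to the on-chain ceiling so transactions never trip the program-side check (a sketch; fetching max_slippage_bps from the config PDA is assumed to happen elsewhere):

```typescript
// Sketch: mirror the on-chain slippage ceiling in the client for better UX.
// The on-chain check remains the source of truth; this only avoids avoidable failures.
function effectiveSlippageBps(requestedBps: number, onChainMaxBps: number): number {
  return Math.min(requestedBps, onChainMaxBps);
}

// e.g. user asks for 300 bps, config caps at 100 bps → send 100 bps
const slippageBps = effectiveSlippageBps(300, 100); // 100
```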


Where this fits in a product

This swap integration is a component in an operating system, not a standalone feature. Understanding the full lifecycle is critical for product reliability:

Workflow: UI → Quote → Intent (idempotency key) → Execute (policy enforcement) → Telemetry (event emission) → Indexer (DB write) → Backoffice (dashboards, support)

Benefits: Idempotency prevents double-swaps, attribution tracks user sessions, debuggability via intent IDs, operability through conversion dashboards.


Intent model: idempotency + attribution

In production systems, you need reliable execution and clean analytics. The intent model provides both.

What is a swap intent?

A SwapIntent captures user action before execution:

import { PublicKey } from '@solana/web3.js';

// Width aliases for documentation; in practice use number or bigint
type u64 = number;
type u16 = number;
type i64 = number;

interface SwapIntent {
intent_id: string; // UUIDv4 or client-generated unique ID
wallet: PublicKey; // User wallet address
input_mint: PublicKey; // Source token
output_mint: PublicKey; // Destination token
amount_in: u64; // Input amount (lamports)
max_slippage_bps: u16; // Max acceptable slippage
created_at: i64; // Client timestamp
client_version: string; // App version (e.g., "web-v2.1.3")
metadata?: Record<string, string>; // Campaign ID, referrer, etc.
}

Client-side intent creation

import { v4 as uuidv4 } from 'uuid';

function createSwapIntent(params: {
wallet: PublicKey;
inputMint: PublicKey;
outputMint: PublicKey;
amountIn: number;
slippageBps: number;
}): SwapIntent {
return {
intent_id: uuidv4(),
wallet: params.wallet,
input_mint: params.inputMint,
output_mint: params.outputMint,
amount_in: params.amountIn,
max_slippage_bps: params.slippageBps,
created_at: Date.now(),
client_version: process.env.NEXT_PUBLIC_APP_VERSION || 'unknown',
metadata: {
campaign_id: getCampaignId(),
referrer: document.referrer,
},
};
}

Benefits of the intent model

1. Idempotency (no double-swaps):

// User clicks "Swap" → creates intent
const intent = createSwapIntent(params);

// Network error on first attempt
try {
await executeSwap(intent); // Fails
} catch (error) {
// User retries with SAME intent_id
await executeSwap(intent); // Succeeds
}

// Backend deduplicates by intent_id
// Only ONE swap executed, even with multiple transactions

2. Consistent analytics: Every event has intent_id enabling quote-to-swap conversion tracking and funnel analysis.

3. Clean support: Intent IDs allow instant lookup of swap context (stale quote, slippage, route issues).

Storage: Store intents client-side (localStorage), off-chain DB (on initiation), and in events (correlation).
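
A minimal sketch of the client-side piece, assuming the SwapIntent shape above and browser localStorage:

```typescript
// Sketch: persist intents client-side so a retry after a network error reuses
// the same intent_id (idempotency). Assumes the SwapIntent interface defined above.
const INTENT_KEY_PREFIX = 'swap-intent:';

function saveIntent(intent: SwapIntent): void {
  localStorage.setItem(INTENT_KEY_PREFIX + intent.intent_id, JSON.stringify(intent));
}

function loadIntent(intentId: string): SwapIntent | null {
  const raw = localStorage.getItem(INTENT_KEY_PREFIX + intentId);
  return raw ? (JSON.parse(raw) as SwapIntent) : null;
}

function clearIntent(intentId: string): void {
  localStorage.removeItem(INTENT_KEY_PREFIX + intentId);
}
```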


Architecture overview

Program structure (Anchor 0.32.1)

programs/token-swap/src/
├── state.rs # Account structures
│ └── JupiterConfig # 78 bytes: admin, fees, slippage, pause
├── constants.rs # Program IDs and seeds
│ ├── JUPITER_V6_PROGRAM_ID # JUP6LkbZbjS1jKKwapdHNy74zcZ3tLUZoi5QNyVTaV4
│ └── JUPITER_CONFIG_SEED # "jupiter_config"
├── errors.rs # 8 custom error types
│ ├── InvalidAmount
│ ├── JupiterPaused
│ └── MinimumOutputNotMet
└── instructions/
├── init_jupiter_config.rs # Initialize config PDA
├── update_jupiter_config.rs # Admin updates
├── jupiter_swap.rs # Main swap execution
└── jupiter_route_swap.rs # Legacy route support

Account layout: JupiterConfig PDA

#[account]
pub struct JupiterConfig {
pub admin: Pubkey, // 32 bytes
pub fee_account: Pubkey, // 32 bytes
pub platform_fee_bps: u16, // 2 bytes (0-10000)
pub max_slippage_bps: u16, // 2 bytes (0-10000)
pub paused: bool, // 1 byte
pub bump: u8, // 1 byte
} // Total: 70 bytes (+ 8 discriminator = 78)

Design decisions:

  • u16 for BPS values (supports full 0-10000 range, 100% = 10000 BPS)
  • Platform fee ≤ 1000 BPS (10%) enforced at init/update
  • Max slippage ≤ 10000 BPS configurable per use case
  • Admin-controlled pause for emergency stops
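
For reference, the basis-point arithmetic these fields imply looks like this (a TypeScript sketch mirroring the on-chain math; 10,000 BPS = 100%, and bigint avoids precision loss on u64 amounts):

```typescript
// Sketch: basis-point fee math used throughout the config.
function platformFee(amountOut: bigint, platformFeeBps: number): bigint {
  return (amountOut * BigInt(platformFeeBps)) / 10_000n;
}

// e.g. 9,500 USDC (6 decimals → 9_500_000_000) at 30 bps → 28_500_000 (28.5 USDC)
console.log(platformFee(9_500_000_000n, 30));
```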

Implementation deep-dive

1. Initialize configuration

#[derive(Accounts)]
pub struct InitJupiterConfig<'info> {
#[account(
init,
payer = admin,
space = 8 + 70,
seeds = [b"jupiter_config"],
bump
)]
pub config: Account<'info, JupiterConfig>,

#[account(mut)]
pub admin: Signer<'info>,

pub system_program: Program<'info, System>,
}

pub fn init_jupiter_config(
ctx: Context<InitJupiterConfig>,
fee_account: Pubkey,
platform_fee_bps: u16,
max_slippage_bps: u16,
) -> Result<()> {
require!(
platform_fee_bps <= 1000,
JupiterSwapError::InvalidPlatformFee
);
require!(
max_slippage_bps <= 10000,
JupiterSwapError::InvalidMaxSlippage
);

let config = &mut ctx.accounts.config;
config.admin = ctx.accounts.admin.key();
config.fee_account = fee_account;
config.platform_fee_bps = platform_fee_bps;
config.max_slippage_bps = max_slippage_bps;
config.paused = false;
config.bump = ctx.bumps.config;

Ok(())
}

Key validations:

  • Platform fee capped at 10% to prevent abuse
  • Max slippage configurable (typically 50-500 BPS for production)
  • PDA derivation ensures single config per program deployment
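
A hedged sketch of the corresponding admin script using the Anchor TypeScript client (the workspace name, argument values, and account list are assumptions and depend on your IDL):

```typescript
import * as anchor from '@coral-xyz/anchor';

// Sketch: initialize the config PDA once per deployment from an admin wallet.
async function initConfig() {
  const provider = anchor.AnchorProvider.env();
  anchor.setProvider(provider);
  const program = anchor.workspace.TokenSwap; // assumed workspace name

  const [configPda] = anchor.web3.PublicKey.findProgramAddressSync(
    [Buffer.from('jupiter_config')],
    program.programId
  );

  await program.methods
    .initJupiterConfig(
      provider.wallet.publicKey, // fee_account (placeholder: admin wallet)
      30,                        // platform_fee_bps (0.3%)
      100                        // max_slippage_bps (1%)
    )
    .accounts({
      config: configPda,
      admin: provider.wallet.publicKey,
    })
    .rpc();
}
```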

2. Execute Jupiter swap (CPI pattern)

#[derive(Accounts)]
pub struct JupiterSwap<'info> {
#[account(
seeds = [b"jupiter_config"],
bump = config.bump
)]
pub config: Account<'info, JupiterConfig>,

#[account(mut)]
pub user: Signer<'info>,

/// CHECK: Jupiter v6 program ID verified in instruction
pub jupiter_program: UncheckedAccount<'info>,

// Token accounts + remaining accounts for Jupiter routing
}

pub fn jupiter_swap<'info>(
ctx: Context<'_, '_, 'info, 'info, JupiterSwap<'info>>,
amount_in: u64,
minimum_amount_out: u64,
) -> Result<()> {
let config = &ctx.accounts.config;

// 1. Validate state
require!(!config.paused, JupiterSwapError::JupiterPaused);
require!(amount_in > 0, JupiterSwapError::InvalidAmount);

// 2. Verify Jupiter program ID
require_keys_eq!(
ctx.accounts.jupiter_program.key(),
JUPITER_V6_PROGRAM_ID.parse::<Pubkey>().unwrap(),
JupiterSwapError::InvalidJupiterProgram
);

// 3. Build CPI accounts (dynamically from remaining_accounts)
let mut accounts = Vec::new();
for account in ctx.remaining_accounts.iter() {
accounts.push(AccountMeta {
pubkey: *account.key,
is_signer: account.is_signer,
is_writable: account.is_writable,
});
}

// 4. Execute CPI to Jupiter
let swap_ix = Instruction {
program_id: ctx.accounts.jupiter_program.key(),
accounts,
data: swap_data, // Jupiter swap instruction data
};

invoke_signed(&swap_ix, ctx.remaining_accounts, &[])?;

// 5. Verify output amount (Token-2022 safe)
let output_amount = observe_vault_delta(); // observe vault delta
require!(
output_amount >= minimum_amount_out,
JupiterSwapError::MinimumOutputNotMet
);

// 6. Collect platform fee
let mut platform_fee: u64 = 0;
if config.platform_fee_bps > 0 {
platform_fee = (output_amount as u128)
.checked_mul(config.platform_fee_bps as u128)
.unwrap()
.checked_div(10_000)
.unwrap() as u64;

// Transfer fee to platform account
}

// 7. Emit event for analytics
emit!(JupiterSwapEvent {
user: ctx.accounts.user.key(),
amount_in,
amount_out: output_amount,
platform_fee,
});

Ok(())
}

Critical implementation details:

Token-2022 compatibility

// WRONG: Trusting instruction amount
let output_amount = minimum_amount_out;

// CORRECT: Observe vault delta
let vault_before = user_output_token.amount;
// ... execute swap ...
user_output_token.reload()?;
let output_amount = user_output_token.amount
.saturating_sub(vault_before);

Transfer fees and hooks mean you cannot trust amounts in instruction data.

Remaining accounts pattern

Jupiter requires dynamic account lists (routes vary by liquidity):

// Frontend passes all necessary accounts
const remainingAccounts = [
{ pubkey: userSourceToken, isSigner: false, isWritable: true },
{ pubkey: userDestToken, isSigner: false, isWritable: true },
// ... all intermediary pool accounts from Jupiter API
];

Program must accept remaining_accounts via:

pub fn jupiter_swap<'info>(
ctx: Context<'_, '_, 'info, 'info, JupiterSwap<'info>>,
// ^^^ lifetime annotation required for remaining_accounts

3. Update configuration (admin-only)

pub fn update_jupiter_config(
ctx: Context<UpdateJupiterConfig>,
new_admin: Option<Pubkey>,
new_fee_account: Option<Pubkey>,
new_platform_fee_bps: Option<u16>,
new_max_slippage_bps: Option<u16>,
new_paused: Option<bool>,
) -> Result<()> {
let config = &mut ctx.accounts.config;

// Validate admin
require_keys_eq!(
ctx.accounts.admin.key(),
config.admin,
JupiterSwapError::Unauthorized
);

// Optional updates (partial update pattern)
if let Some(admin) = new_admin {
config.admin = admin;
}
if let Some(fee_bps) = new_platform_fee_bps {
require!(fee_bps <= 1000, JupiterSwapError::InvalidPlatformFee);
config.platform_fee_bps = fee_bps;
}
// ... other optional fields

Ok(())
}

Partial update pattern: All fields optional → supports single-field updates without re-specifying everything.
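
For example, a client-side call that flips only the pause flag might look like this (a sketch continuing the Anchor client usage above; Rust Option<T> arguments map to nullable values, so untouched fields are passed as null):

```typescript
// Sketch: toggle only `paused`; program, provider, and configPda come from the
// init script shown earlier.
async function pauseSwaps() {
  await program.methods
    .updateJupiterConfig(
      null,  // new_admin
      null,  // new_fee_account
      null,  // new_platform_fee_bps
      null,  // new_max_slippage_bps
      true   // new_paused → emergency stop
    )
    .accounts({
      config: configPda,
      admin: provider.wallet.publicKey,
    })
    .rpc();
}
```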


Frontend integration

React hook: useJupiter

import { useConnection, useWallet } from '@solana/wallet-adapter-react';
import { PublicKey, TransactionMessage, VersionedTransaction } from '@solana/web3.js';
import { BN } from '@coral-xyz/anchor';
import { useProgram } from './useSwapProgram';

export function useJupiter() {
const { connection } = useConnection();
const wallet = useWallet();
const program = useProgram();

async function getQuote(params: {
inputMint: string;
outputMint: string;
amount: number;
slippageBps?: number;
}) {
const response = await fetch(
`https://quote-api.jup.ag/v6/quote?` +
new URLSearchParams({
inputMint: params.inputMint,
outputMint: params.outputMint,
amount: params.amount.toString(),
slippageBps: (params.slippageBps || 50).toString(),
})
);
return response.json();
}

async function getSwapInstructions(quote: any) {
const response = await fetch('https://quote-api.jup.ag/v6/swap-instructions', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
quoteResponse: quote,
userPublicKey: wallet.publicKey!.toBase58(),
wrapAndUnwrapSol: true,
// Use versioned transactions for ALT support
useVersionedTransaction: true,
}),
});
return response.json();
}

async function executeSwapWithProgram(quote: any) {
if (!wallet.publicKey || !program) return;

const { swapInstruction } = await getSwapInstructions(quote);

// Get config PDA
const [configPda] = PublicKey.findProgramAddressSync(
[Buffer.from('jupiter_config')],
program.programId
);

// Build transaction via Anchor
const tx = await program.methods
.jupiterSwap(
new BN(quote.inAmount),
new BN(quote.outAmount)
)
.accounts({
config: configPda,
user: wallet.publicKey,
jupiterProgram: new PublicKey('JUP6LkbZbjS1jKKwapdHNy74zcZ3tLUZoi5QNyVTaV4'),
})
.remainingAccounts(swapInstruction.accounts) // Dynamic routing accounts
.transaction();

// Handle Address Lookup Tables if present
if (swapInstruction.addressLookupTableAccounts?.length > 0) {
const lookupTables = await Promise.all(
swapInstruction.addressLookupTableAccounts.map((key: string) =>
connection.getAddressLookupTable(new PublicKey(key))
)
);

// Build versioned transaction
const message = new TransactionMessage({
payerKey: wallet.publicKey,
recentBlockhash: (await connection.getLatestBlockhash()).blockhash,
instructions: tx.instructions,
}).compileToV0Message(lookupTables.map(lt => lt.value!));

const versionedTx = new VersionedTransaction(message);
await wallet.sendTransaction(versionedTx, connection);
} else {
await wallet.sendTransaction(tx, connection);
}
}

return { getQuote, executeSwapWithProgram };
}

Key frontend considerations:

Address Lookup Tables (ALTs)

Complex Jupiter routes exceed the 1232-byte transaction limit. ALTs compress account lists:

// Without ALT: 32 bytes per account × 40 accounts = 1280 bytes (fails)
// With ALT: table reference + indices = ~50 bytes (works)

Use versioned transactions (v0) to support ALTs.

Quote freshness

Jupiter quotes expire quickly (10-30 seconds):

const quote = await getQuote(params);
// Wait too long...
await sleep(60000); // Quote now stale
await executeSwap(quote); // Likely fails with slippage error

Best practice: poll quotes every 5-10 seconds during user review.
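
A small sketch of a freshness guard before sending the transaction (the 15-second window is an assumption; tune it to your refresh interval):

```typescript
// Sketch: refuse to execute against a quote older than MAX_QUOTE_AGE_MS.
// `quoteFetchedAt` is recorded client-side when the quote response arrives.
const MAX_QUOTE_AGE_MS = 15_000;

function isQuoteFresh(quoteFetchedAt: number, now: number = Date.now()): boolean {
  return now - quoteFetchedAt <= MAX_QUOTE_AGE_MS;
}

// Before executing:
// if (!isQuoteFresh(quoteFetchedAt)) { quote = await getQuote(params); }
```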


UI component: JupiterSwap

'use client';

import { useState, useEffect } from 'react';
import { useJupiter } from '@/hooks/useJupiter';

const TOKENS = {
SOL: 'So11111111111111111111111111111111111111112',
USDC: 'EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v',
USDT: 'Es9vMFrzaCERmJfrF4H2FYD4KCoNkY11McCe8BenwNYB',
JUP: 'JUPyiwrYJFskUPiHa7hkeR8VUtAeFoSYbKedZNsDvCN',
};

export function JupiterSwap() {
const { getQuote, executeSwapWithProgram } = useJupiter();
const [inputMint, setInputMint] = useState(TOKENS.SOL);
const [outputMint, setOutputMint] = useState(TOKENS.USDC);
const [amount, setAmount] = useState('1.0');
const [quote, setQuote] = useState<any>(null);
const [loading, setLoading] = useState(false);

// Auto-refresh quote every 10 seconds
useEffect(() => {
const interval = setInterval(async () => {
if (amount && parseFloat(amount) > 0) {
const q = await getQuote({
inputMint,
outputMint,
amount: parseFloat(amount) * 1e9, // Convert to lamports
slippageBps: 50,
});
setQuote(q);
}
}, 10000);
return () => clearInterval(interval);
}, [inputMint, outputMint, amount]);

const handleSwap = async () => {
setLoading(true);
try {
await executeSwapWithProgram(quote);
// Success notification
} catch (error) {
console.error('Swap failed:', error);
} finally {
setLoading(false);
}
};

return (
<div className="swap-card">
<div className="input-section">
<input
type="number"
value={amount}
onChange={(e) => setAmount(e.target.value)}
placeholder="Amount"
/>
<select value={inputMint} onChange={(e) => setInputMint(e.target.value)}>
<option value={TOKENS.SOL}>SOL</option>
<option value={TOKENS.USDC}>USDC</option>
<option value={TOKENS.USDT}>USDT</option>
<option value={TOKENS.JUP}>JUP</option>
</select>
</div>

<div className="output-section">
<div className="estimated-output">
{quote ? (
<>
<span className="amount">{(quote.outAmount / 1e6).toFixed(6)}</span>
<span className="price-impact">
Price impact: {(quote.priceImpactPct * 100).toFixed(2)}%
</span>
</>
) : (
<span className="loading">Fetching quote...</span>
)}
</div>
<select value={outputMint} onChange={(e) => setOutputMint(e.target.value)}>
<option value={TOKENS.USDC}>USDC</option>
<option value={TOKENS.USDT}>USDT</option>
<option value={TOKENS.SOL}>SOL</option>
<option value={TOKENS.JUP}>JUP</option>
</select>
</div>

<button
onClick={handleSwap}
disabled={loading || !quote}
className="swap-button"
>
{loading ? 'Swapping...' : 'Swap'}
</button>

{quote && (
<div className="route-info">
<div>Route: {quote.routePlan?.map((r: any) => r.swapInfo.label).join(' → ')}</div>
<div>Min output: {((quote.outAmount * 0.995) / 1e6).toFixed(6)} (0.5% slippage)</div>
</div>
)}
</div>
);
}

UX enhancements:

  • Real-time quote updates (auto-refresh)
  • Price impact warnings (greater than 5% highlighted)
  • Route visualization (which venues are used)
  • Minimum output calculation (slippage tolerance display)

Production deployment checklist

Pre-deployment validation

| Check | Why it matters | How to verify |
| --- | --- | --- |
| Program ID fixed | Reproducible builds | anchor build --verifiable |
| Upgrade authority | Immutability post-audit | solana program set-upgrade-authority <program_id> --final |
| Config admin | Emergency controls | Multisig or DAO-controlled |
| Platform fee ≤ 1% | Competitive with alternatives | Review platform_fee_bps |
| Max slippage reasonable | Protect users | Typically 50-500 BPS |
| Pause mechanism tested | Kill-switch works | Integration test coverage |
| Token-2022 tested | Fee-on-transfer handling | Test with a transfer-fee (Token-2022) mint |

Deployment steps

# 1. Build verifiable program
anchor build --verifiable

# 2. Deploy to devnet
anchor deploy --provider.cluster devnet

# 3. Initialize config (via multisig in production)
anchor run initialize-config --provider.cluster devnet

# 4. Verify deployment
solana program show <program_id>

# 5. Audit & security review
# (Use Sec3, OtterSec, or similar)

# 6. Deploy to mainnet
anchor deploy --provider.cluster mainnet-beta

# 7. Initialize mainnet config
anchor run initialize-config --provider.cluster mainnet-beta

# 8. Set upgrade authority to final
solana program set-upgrade-authority <program_id> --final

Operational runbook: support & incident handling

When swaps fail or users contact support, you need immediate answers. This runbook maps symptoms to root causes.

Required debug information

Every support ticket needs:

  • Transaction signature (if swap was attempted)
  • Intent ID (from client logs or user session)
  • Wallet address
  • Input/output mints
  • Timestamp (when issue occurred)

Failure classification matrix

| Symptom | Root cause | Diagnosis | Immediate mitigation |
| --- | --- | --- | --- |
| "Slippage tolerance exceeded" | Quote stale OR low liquidity | Check quote_age_seconds in event | Shorten quote refresh interval to 5-10s |
| "Transaction simulation failed" | Compute budget exceeded | Check route complexity (hops >3) | Bump compute units to 400k for complex routes |
| "Account not found" | ALT missing or not loaded | Check addressLookupTableAccounts in txn | Ensure ALTs created and extended with pool accounts |
| "Insufficient funds" | User balance < amount + fees | Check wallet balance vs amount_in | Show clear error: "Need X SOL for fees" |
| "Custom program error: 0x1770" | Token-2022 transfer fee | Check if token has transfer fee extension | Use vault delta verification (not instruction data) |
| "Transaction timeout" | Network congestion | Check priority fee paid | Increase priority fee dynamically (use Helius API) |
| "Invalid instruction data" | Jupiter program upgraded | Check program version mismatch | Update Jupiter program ID constant |
| Swap succeeds but user didn't receive full amount | Token-2022 fee-on-transfer | Compare quote vs actual received | Document this in UI ("Receives ~X after fees") |

Incident response playbook

Scenario 1: Sudden spike in failures (greater than 10% failure rate)

Immediate actions:

  1. Check Solana network status (https://status.solana.com)

  2. Verify Jupiter API health (https://status.jup.ag)

  3. Query last 100 failures by error_code:

    SELECT error_code, COUNT(*) as count
    FROM swap_failed_events
    WHERE timestamp > NOW() - INTERVAL '1 hour'
    GROUP BY error_code
    ORDER BY count DESC;
  4. If error_code=1005 (compute exceeded): Bump compute budget globally

  5. If error_code=1002 (stale quote): Reduce quote refresh interval

  6. If widespread network issue: Enable pause toggle via admin

Scenario 2: User reports "missing tokens"

Debug flow:

  1. Get transaction signature → check on Solscan/Explorer
  2. Verify transaction succeeded (success or failure)
  3. If succeeded:
    • Check JupiterSwapEvent.amount_out
    • Compare to user's token account balance
    • Check for Token-2022 transfer fees (some tokens deduct on transfer)
  4. If failed:
    • Check JupiterSwapFailedEvent.error_code
    • Map to human-readable explanation
    • Guide user on fix (refresh quote, increase slippage, etc.)

Scenario 3: Revenue suddenly drops

Diagnostic queries:

-- Check if swap volume dropped or fee collection failed
SELECT
DATE_TRUNC('hour', timestamp) as hour,
COUNT(*) as swap_count,
SUM(amount_in) as total_volume,
SUM(platform_fee) as revenue
FROM swap_events
WHERE timestamp > NOW() - INTERVAL '24 hours'
GROUP BY hour
ORDER BY hour DESC;

Possible causes:

  • Fee collection logic broken (check program logs)
  • Users bypassing your wrapper (check if they're using Jupiter directly)
  • Platform fee set to 0 accidentally (check config PDA)

Proactive monitoring alerts

Set up alerts for:

  • Failure rate greater than 5% for 10 minutes
  • Quote-to-swap conversion less than 70% (indicates UX friction)
  • Median execution latency greater than 30 seconds (quote staleness)
  • Zero swaps for 15 minutes (system down or paused)
  • Platform fee revenue drops greater than 50% hour-over-hour

Common pitfalls and solutions (reference)

1. Transaction size limits (exceeded max accounts)

Problem: Complex Jupiter routes require 30-50 accounts, exceeding transaction limits.

Solution: Use Address Lookup Tables (ALTs)

// Create ALT during program initialization
const [lookupTable, _] = AddressLookupTableProgram.createLookupTable({
authority: admin,
payer: admin,
recentSlot: await connection.getSlot(),
});

// Add frequently-used accounts
await connection.sendTransaction(
AddressLookupTableProgram.extendLookupTable({
lookupTable,
authority: admin,
payer: admin,
addresses: [USDC_MINT, USDT_MINT, /* common pools */],
})
);

2. Slippage errors on mainnet (works on devnet)

Problem: Mainnet has higher volatility and MEV, causing more slippage failures.

Solution: Dynamic slippage based on liquidity

function calculateSlippage(quote: JupiterQuote) {
const priceImpact = quote.priceImpactPct;

if (priceImpact < 0.01) return 50; // 0.5% for deep liquidity
if (priceImpact < 0.05) return 100; // 1% for medium liquidity
return 500; // 5% for low liquidity
}

3. Token-2022 fee-on-transfer not accounted

Problem: Mints that use the Token-2022 transfer-fee extension deliver less than the quoted amount, because a fee is deducted on every transfer.

Solution: Always observe vault deltas

let before = token_account.amount;
// ... execute transfer ...
token_account.reload()?;
let actual_received = token_account.amount.saturating_sub(before);

4. Quote expiration (stale routes)

Problem: User reviews swap for 60 seconds, quote becomes stale, transaction fails.

Solution: Auto-refresh quotes

useEffect(() => {
const interval = setInterval(refreshQuote, 10000); // Every 10s
return () => clearInterval(interval);
}, [inputMint, outputMint, amount]);

5. Insufficient compute budget

Problem: Complex routes run out of compute units (200k default).

Solution: Request higher compute budget

use solana_program::compute_budget::ComputeBudgetInstruction;

// Add as first instruction in transaction
let compute_ix = ComputeBudgetInstruction::set_compute_unit_limit(400_000);
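
On the client, the equivalent is to prepend compute-budget instructions with @solana/web3.js (the unit limit and priority-fee values below are illustrative):

```typescript
import { ComputeBudgetProgram } from '@solana/web3.js';

// Prepend these to the swap transaction, e.g. new Transaction().add(...computeIxs, swapIx)
const computeIxs = [
  ComputeBudgetProgram.setComputeUnitLimit({ units: 400_000 }),        // raise CU limit
  ComputeBudgetProgram.setComputeUnitPrice({ microLamports: 10_000 }), // optional priority fee
];
```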

Performance optimization

Account lookup optimization

// Inefficient: Multiple account lookups
for account in ctx.remaining_accounts.iter() {
let data = account.try_borrow_data()?;
// Process...
}

// Efficient: Single borrow per account
let accounts: Vec<_> = ctx.remaining_accounts
.iter()
.map(|a| (a.key(), a.try_borrow_data()))
.collect();

Fee calculation (avoid division)

// Slower: Division
let fee = (amount * fee_bps) / 10000;

// Faster: a bit shift, but only when the divisor is a power of two
// (e.g. dividing by 256 is a right shift by 8); 50 bps = 1/200 is not,
// so this trick rarely applies to BPS math; profile before micro-optimizing

Frontend quote batching

// Sequential quotes (slower)
const quote1 = await getQuote({ inputMint: SOL, outputMint: USDC, amount: 1e9 });
const quote2 = await getQuote({ inputMint: SOL, outputMint: USDT, amount: 1e9 });

// Parallel quotes (faster)
const [quote1, quote2] = await Promise.all([
getQuote({ inputMint: SOL, outputMint: USDC, amount: 1e9 }),
getQuote({ inputMint: SOL, outputMint: USDT, amount: 1e9 }),
]);

Monitoring and analytics

On-chain events (CRM/ops-grade telemetry)

Your event schema defines what you can measure and debug. Make it comprehensive:

#[event]
pub struct JupiterSwapEvent {
// Attribution
pub intent_id: [u8; 16], // UUID bytes for client-side correlation
pub user: Pubkey, // Wallet address
pub client_version: [u8; 32], // App version (e.g., "web-v2.1.3\0\0...")

// Swap details
pub input_mint: Pubkey,
pub output_mint: Pubkey,
pub amount_in: u64,
pub amount_out: u64,
pub platform_fee: u64,

// Execution context
pub quote_timestamp: i64, // When quote was generated (detect staleness)
pub execution_timestamp: i64, // When swap executed
pub route_hash: u64, // Hash of route plan (fingerprint venues used)
pub slippage_bps_requested: u16, // User-requested slippage
pub slippage_bps_effective: u16, // Actual slippage observed

// Operational data
pub compute_units_consumed: u64, // For performance tuning
pub priority_fee_paid: u64, // MEV/congestion analysis
}

#[event]
pub struct JupiterSwapFailedEvent {
// Attribution (same as success event)
pub intent_id: [u8; 16],
pub user: Pubkey,
pub client_version: [u8; 32],

// Failure context
pub input_mint: Pubkey,
pub output_mint: Pubkey,
pub amount_in: u64,
pub minimum_amount_out: u64,

// Diagnostic data
pub error_code: u32, // Mapped to human-readable reasons
pub program_error: Option<String>, // Anchor error details
pub quote_age_seconds: i64, // How old was the quote?
pub timestamp: i64,
}

Why these fields matter:

  • intent_id: Join client logs, backend DB, and on-chain events for full trace
  • client_version: Identify bugs introduced in specific releases
  • quote_timestamp vs execution_timestamp: Measure latency, detect stale quotes
  • route_hash: Identify which venue combinations succeed/fail most
  • slippage_bps_effective: Measure if users are over-allocating slippage tolerance
  • compute_units_consumed: Optimize compute budgets dynamically
  • error_code: Build dashboards showing top failure reasons
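
For example, an indexer can convert the on-chain intent_id bytes back into the UUID string the client generated, so events join cleanly with client and backend logs (a small sketch):

```typescript
// Sketch: turn `intent_id: [u8; 16]` into the canonical 8-4-4-4-12 UUID string.
function intentIdToUuid(bytes: Uint8Array): string {
  const hex = Array.from(bytes, (b) => b.toString(16).padStart(2, '0')).join('');
  return [
    hex.slice(0, 8),
    hex.slice(8, 12),
    hex.slice(12, 16),
    hex.slice(16, 20),
    hex.slice(20),
  ].join('-');
}
```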

Failure telemetry (first-class error logging)

Most teams only emit success events. Failures are more valuable for ops:

pub fn jupiter_swap<'info>(
ctx: Context<'_, '_, 'info, 'info, JupiterSwap<'info>>,
intent_id: [u8; 16],
amount_in: u64,
minimum_amount_out: u64,
) -> Result<()> {
// ... validation and swap execution ...

// If swap fails, emit failure event BEFORE returning error
if let Err(e) = execute_jupiter_cpi(&ctx, amount_in) {
emit!(JupiterSwapFailedEvent {
intent_id,
user: ctx.accounts.user.key(),
client_version: get_client_version(),
input_mint: ctx.accounts.input_mint.key(),
output_mint: ctx.accounts.output_mint.key(),
amount_in,
minimum_amount_out,
error_code: map_error_to_code(&e),
program_error: Some(e.to_string()),
quote_age_seconds: calculate_quote_age(),
timestamp: Clock::get()?.unix_timestamp,
});
return Err(e);
}

// Success: emit success event
emit!(JupiterSwapEvent { /* ... */ });
Ok(())
}

Error code mapping (for structured dashboards):

fn map_error_to_code(error: &anchor_lang::error::Error) -> u32 {
match error {
JupiterSwapError::MinimumOutputNotMet => 1001,
JupiterSwapError::StaleQuote => 1002,
JupiterSwapError::JupiterPaused => 1003,
JupiterSwapError::InsufficientLiquidity => 1004,
JupiterSwapError::ComputeBudgetExceeded => 1005,
JupiterSwapError::ALTMissing => 1006,
JupiterSwapError::Token2022TransferFee => 1007,
_ => 9999, // Unknown error
}
}
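
On the dashboard and support side, a matching lookup table keeps the labels consistent (a TypeScript sketch mirroring the mapping above):

```typescript
// Sketch: map on-chain error codes to the labels used in support tooling and dashboards.
const ERROR_MESSAGES: Record<number, string> = {
  1001: 'Slippage exceeded (minimum output not met)',
  1002: 'Stale quote',
  1003: 'Swaps paused by admin',
  1004: 'Insufficient liquidity',
  1005: 'Compute budget exceeded',
  1006: 'Address lookup table missing',
  1007: 'Token-2022 transfer fee mismatch',
};

export function describeFailure(errorCode: number): string {
  return ERROR_MESSAGES[errorCode] ?? `Unknown error (${errorCode})`;
}
```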

Dashboard-ready KPI definitions

These metrics map directly to SQL queries and BI dashboards:

| KPI | Formula | Query example | Target |
| --- | --- | --- | --- |
| Quote → Swap conversion | completed_swaps / quote_requests | SELECT COUNT(DISTINCT intent_id) FROM swaps / COUNT(*) FROM quotes | > 80% |
| Intent completion rate | completed_intents / created_intents | SELECT successful / total FROM intent_summary | > 90% |
| Failure rate by reason | failures(reason=X) / total_attempts | SELECT error_code, COUNT(*) / total FROM failures GROUP BY error_code | < 5% overall |
| Median execution latency | median(event_time - intent_creation_time) | SELECT PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY latency) FROM swaps | < 15s |
| Revenue (USD) | SUM(platform_fee * token_price_usd) | SELECT SUM(f.amount * p.price) FROM fees f JOIN prices p ON f.mint = p.mint | Track growth |
| Route health score | success_rate_per_route_fingerprint | SELECT route_hash, COUNT(*) successes / total FROM swaps GROUP BY route_hash | > 95% per route |
| Effective slippage | AVG((quote_out - actual_out) / quote_out * 10000) | SELECT AVG((quoted - actual) / quoted * 10000) FROM swaps | < 50 BPS |
| Repeat user rate | users_with_2plus_swaps / total_users | SELECT COUNT(DISTINCT user) FROM (SELECT user, COUNT(*) c FROM swaps GROUP BY user HAVING c >= 2) | > 40% |

Sample dashboard SQL (Postgres):

-- Real-time conversion funnel
WITH funnel AS (
SELECT
COUNT(DISTINCT q.intent_id) as quotes,
COUNT(DISTINCT s.intent_id) as swaps,
COUNT(DISTINCT CASE WHEN s.success = true THEN s.intent_id END) as successful
FROM quote_requests q
LEFT JOIN swap_events s ON q.intent_id = s.intent_id
WHERE q.created_at > NOW() - INTERVAL '1 hour'
)
SELECT
quotes,
swaps,
successful,
ROUND(100.0 * swaps / NULLIF(quotes, 0), 2) as quote_to_swap_pct,
ROUND(100.0 * successful / NULLIF(swaps, 0), 2) as success_rate
FROM funnel;

-- Top failure reasons (last 24 hours)
SELECT
  CASE error_code
    WHEN 1001 THEN 'Slippage exceeded'
    WHEN 1002 THEN 'Stale quote'
    WHEN 1005 THEN 'Compute budget exceeded'
    WHEN 1006 THEN 'ALT missing'
    WHEN 1007 THEN 'Token-2022 transfer fee'
    ELSE 'Unknown'
  END as failure_reason,
  COUNT(*) as occurrences,
  ROUND(100.0 * COUNT(*) / SUM(COUNT(*)) OVER (), 2) as percentage
FROM swap_failed_events
WHERE timestamp > NOW() - INTERVAL '24 hours'
GROUP BY error_code
ORDER BY occurrences DESC;

-- Revenue by token pair (last 7 days)
SELECT
  s.input_mint,
  s.output_mint,
  COUNT(*) as swap_count,
  SUM(s.platform_fee) as total_fee_tokens,
  SUM(s.platform_fee * p.price_usd) as revenue_usd
FROM swap_events s
JOIN token_prices p ON s.output_mint = p.mint
WHERE s.timestamp > NOW() - INTERVAL '7 days'
GROUP BY s.input_mint, s.output_mint
ORDER BY revenue_usd DESC
LIMIT 10;

Grafana/Metabase integration:

  • Create alerts on conversion rate drop (greater than 10% decrease)
  • Dashboard panels: conversion funnel, failure reasons pie chart, revenue time series
  • User cohort analysis: new vs returning users by swap count

Indexing with Helius/Hellomoon

// Subscribe to program logs
const connection = new Connection(HELIUS_RPC_URL);

connection.onLogs(
  programId,
  async (logs) => {
    if (logs.logs.some((log) => log.includes('JupiterSwapEvent'))) {
      // Parse the event and store it in the database
      const event = parseJupiterSwapEvent(logs);
      await db.swaps.insert(event);
    }
  },
  'confirmed'
);
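On the indexer side, Anchor events arrive as "Program data: <base64>" log lines whose first 8 bytes are the event discriminator. Below is a minimal Rust sketch of that decoding step; the field subset, the discriminator argument, and the helper name are illustrative, not taken from an official SDK.

use base64::Engine;
use borsh::BorshDeserialize;

/// Partial mirror of the on-chain JupiterSwapEvent (fields must appear in the
/// same order as the on-chain declaration; only a subset is shown here).
#[derive(BorshDeserialize)]
struct JupiterSwapEventRow {
    intent_id: [u8; 16],
    amount_in: u64,
    minimum_amount_out: u64,
}

/// Decode one log line if it carries our event; `discriminator` is the first
/// 8 bytes of sha256("event:JupiterSwapEvent").
fn parse_swap_event(log_line: &str, discriminator: [u8; 8]) -> Option<JupiterSwapEventRow> {
    let payload = log_line.strip_prefix("Program data: ")?;
    let bytes = base64::engine::general_purpose::STANDARD.decode(payload).ok()?;
    if bytes.len() < 8 || bytes[..8] != discriminator {
        return None; // some other event, or not an event log at all
    }
    // deserialize (not try_from_slice) so trailing fields we didn't mirror are tolerated
    JupiterSwapEventRow::deserialize(&mut &bytes[8..]).ok()
}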

Cost analysis

Transaction costs (mainnet, Dec 2024)

| Operation | Compute units | Typical cost (SOL) | Notes |
| --- | --- | --- | --- |
| Init config | ~5,000 | 0.000005 | One-time |
| Simple swap (1 hop) | ~50,000 | 0.00005 | Direct pool |
| Complex swap (3+ hops) | ~200,000 | 0.0002 | Multi-route |
| Update config | ~3,000 | 0.000003 | Admin only |

Priority fees: Add 0.00001-0.0001 SOL during congestion for faster inclusion.

Platform revenue model

Example with 0.3% platform fee:

  • User swaps 100 SOL → USDC
  • Jupiter finds route yielding 9,500 USDC
  • Platform collects: 9,500 × 0.003 = 28.5 USDC
  • User receives: 9,471.5 USDC

At 1M SOL monthly volume (current mid-tier DEX):

  • Revenue: ~$30,000/month (assuming 0.3% fee, $100 SOL)
  • Competitive with 0.01-0.1% range most aggregators use
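
FEE_BPS and split_output below are illustrative names, not the on-chain program's API; the sketch just shows the checked basis-point math behind the fee example above (and the checked_mul / checked_div requirement in the deployment checklist).

/// 0.3% platform fee expressed in basis points.
const FEE_BPS: u64 = 30;

/// Split a swap's output into (user amount, platform fee) with overflow-safe math.
fn split_output(amount_out: u64) -> Option<(u64, u64)> {
    let fee = amount_out.checked_mul(FEE_BPS)?.checked_div(10_000)?;
    let user = amount_out.checked_sub(fee)?;
    Some((user, fee))
}

// 9,500 USDC (6 decimals): fee = 28.5 USDC, user receives 9,471.5 USDC
// assert_eq!(split_output(9_500_000_000), Some((9_471_500_000, 28_500_000)));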

Future enhancements

1. Limit orders via DCA (Dollar-Cost Averaging)

Jupiter DCA allows scheduled swaps:

// Create DCA order
const dcaOrder = await program.methods
  .createDcaOrder({
    inputMint: SOL,
    outputMint: USDC,
    amountPerSwap: 10 * 1e9, // 10 SOL
    interval: 3600, // 1 hour
    totalAmount: 1000 * 1e9, // 1000 SOL total
  })
  .rpc();

2. MEV protection via private routing

Integrate Jupiter's private RPC:

const quote = await getQuote({
  inputMint,
  outputMint,
  amount,
  // Use private RPC to avoid frontrunning
  rpcUrl: 'https://private.jup.ag/rpc',
});

3. Cross-chain swaps (via Wormhole)

Jupiter integrates Wormhole for cross-chain swaps:

pub fn cross_chain_swap(
    ctx: Context<CrossChainSwap>,
    destination_chain: u16, // e.g., 1 = Ethereum
    amount: u64,
) -> Result<()> {
    // Swap SOL → USDC on Solana
    // Bridge USDC to Ethereum via Wormhole
    // Swap USDC → ETH on Ethereum
    Ok(())
}

4. Liquidity aggregation metrics

Show users why Jupiter found a better price:

interface RouteBreakdown {
  venue: string;
  percentage: number; // % of trade routed through this venue
  priceImpact: number;
}

const breakdown: RouteBreakdown[] = [
  { venue: 'Raydium CLMM', percentage: 60, priceImpact: 0.12 },
  { venue: 'Orca Whirlpool', percentage: 30, priceImpact: 0.08 },
  { venue: 'Meteora DLMM', percentage: 10, priceImpact: 0.05 },
];

Comparison: direct Jupiter API vs program wrapper

| Aspect | Direct API | Program wrapper (this guide) |
| --- | --- | --- |
| Implementation time | 2 hours | 1 week (with tests) |
| Fee collection | manual off-chain | automatic on-chain |
| Composability | limited | full CPI support |
| Audit surface | none | program code |
| Branding | client-side only | on-chain enforcement |
| Access control | none | admin-gated |
| Event logs | none | on-chain events |
| Upgrade path | N/A | versioned program |

When to use direct API:

  • Quick prototypes
  • Personal tools
  • No fee collection needed

When to use program wrapper:

  • Platform/product launch
  • Need fee revenue
  • Want composability with other protocols
  • Require audit for institutional users

Resources

Official documentation

Code examples

Community


Production integration checklist

Use this checklist before deploying to mainnet:

Pre-deployment

  • Intent ID propagated end-to-end: Client generates UUID → passed to program → included in events
  • Client version tracking: App version captured in all events for release correlation
  • Quote refresh mechanism: Auto-refresh every 10s; warn user if quote >30s old
  • Slippage calculation: Dynamic slippage based on liquidity depth (not hardcoded)
  • Quote staleness guard: Validate quote_timestamp on backend before execution

Smart contract

  • CPI remaining accounts tested: Verified with 2-hop, 3-hop, and 5-hop routes
  • Token-2022 vault delta validation: Output amount observed from token account change, not instruction data
  • ALT/v0 transaction support: Complex routes (>20 accounts) tested with Address Lookup Tables
  • Platform fee collection: Verified fees transfer to correct account on every swap
  • Slippage enforcement: On-chain max slippage cap cannot be bypassed by client
  • Pause toggle works: Admin can halt swaps; verified in integration tests
  • Overflow protection: All fee calculations use checked_mul / checked_div

Telemetry & observability

  • Success events comprehensive: Include intent_id, route_hash, slippage_effective, compute_units
  • Failure events captured: Emit SwapFailedEvent with error_code before returning errors
  • Error code mapping: All program errors map to documented reason codes (1001-1007+)
  • Event indexer running: Helius/Hellomoon/custom indexer writes events to database
  • Dashboard metrics live: Conversion rate, failure breakdown, revenue tracking operational

Operational readiness

  • Support runbook documented: Team knows how to classify failures by error code
  • Admin key security: Stored in hardware wallet or multisig (not hot wallet)
  • Monitoring alerts configured: Failure rate greater than 5%, conversion less than 70%, revenue drops
  • Incident response plan: Who to contact for Jupiter API issues, network outages
  • User-facing error messages: Map error codes to helpful guidance (e.g., "Quote expired. Refresh and retry.")

Testing

  • Mainnet-like environment: Tested on devnet with realistic routes and tokens
  • Token-2022 tokens tested: cover transfer-fee and other token-extension configurations, not just classic SPL mints
  • High-compute routes tested: 5+ hop routes with compute budget adjustments
  • Failure scenarios tested: Stale quote, slippage exceeded, insufficient balance
  • End-to-end user flow: Quote → Intent → Execute → Event → Dashboard (full trace)

Deployment

  • Verifiable build: anchor build --verifiable succeeds; hash matches deployed program
  • Config PDA initialized: Admin, fee account, platform fee, max slippage set correctly
  • Frontend pointing to correct program: Program ID hardcoded matches deployed address
  • Upgrade authority finalized: Set to multisig or --final after audit

Conclusion

Building a Jupiter integration on Solana requires:

  1. Solid Anchor fundamentals (PDAs, CPIs, account validation)
  2. Token-2022 awareness (vault deltas, not instruction amounts)
  3. Transaction size management (ALTs for complex routes)
  4. Comprehensive testing (unit + integration + E2E)
  5. Production-grade monitoring (events, metrics, alerts)

The result is a composable, fee-collecting swap infrastructure that leverages Jupiter's best-in-class routing while maintaining control over user experience and revenue.


Key takeaways:

  • Jupiter abstracts away liquidity fragmentation (20+ venues)
  • Program wrappers enable fee collection and composability
  • Token-2022 compatibility is non-negotiable in 2024+
  • Address Lookup Tables are essential for complex routes
  • Testing prevents costly mainnet bugs (each fix costs 0.5-1 SOL)

Ship safe.

AMM vs CPAMM on Solana: constant product vs CLMM, DLMM, PMM, and order books

· 15 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Solana runtime constraints: accounts, locks, and contention

TL;DR

  • AMM is the category: any on-chain venue that computes prices algorithmically from state (reserves, parameters, or inventory).
  • CPAMM / CPMM is one specific AMM family: constant product with invariant x · y = k (Uniswap v2-style).
  • The useful comparison is CPAMM vs other liquidity designs:
    • StableSwap (stable/correlated assets),
    • CLMM (concentrated liquidity / ticks),
    • DLMM (bin-based liquidity + often dynamic fees),
    • PMM / oracle-anchored (proactive quoting around a “fair price”),
    • CLOB (order books),
    • TWAMM (time-sliced execution),
    • bonding-curve launch mechanisms (virtual reserves → migration).
  • On Solana, the tradeoffs are heavily shaped by:
    • account write locks / hot accounts (parallelism vs contention),
    • Token-2022 extensions (transfer fees/hooks can break naive “amount_in == amount_received” math),
    • router-first distribution (aggregator integration matters),
    • MEV & atomic execution tooling (bundles / private routes / quote freshness).

AMM vs CPAMM (and why the wording matters)

AMM (the umbrella)

An AMM is any on-chain market maker that:

  • holds on-chain state (reserves, inventory, parameters),
  • updates price algorithmically,
  • executes swaps against that state.

An on-chain order book can be fully on-chain too, but it’s not an AMM: it matches explicit bids/asks, not a curve/invariant rule.

CPAMM / CPMM (the subtype)

A CPAMM is a constant-function AMM where:

x · y = k

x and y are pool reserves.

So:

  • all CPAMMs are AMMs
  • not all AMMs are CPAMMs

CPAMM mechanics in one screen (math + semantics)

Let reserves be (x, y) and you swap dx of X for Y.

Fee model (input fee)

If fee is f (e.g. 0.003 for 30 bps):

dx' = dx · (1 − f)

Output

dy = (y · dx') / (x + dx')

Reserve update

  • x := x + dx
  • y := y - dy

Observed vault delta (Token-2022-safe input amount):

dx_eff = vault_after − vault_before

Price intuition (useful when comparing designs)

  • Spot price (ignoring fees): p ≈ y/x (direction depends on quote convention)
  • For small trades, slippage is roughly proportional to trade size / liquidity depth.
  • Fees retained in the pool tend to increase k over time (LPs get paid via reserve growth).

Comparison tables

Taxonomy and “what is being compared?”

| Term / design | Category? | Core idea | Typical on-chain state | Who provides liquidity? | Quote source |
| --- | --- | --- | --- | --- | --- |
| AMM | Yes | Algorithmic pricing vs state | varies | varies | curve/parameters/inventory |
| CFAMM (constant-function AMM) | Yes | Trades move along an invariant | reserves + params | LPs or protocol | invariant |
| CPAMM / CPMM | Yes | x*y=k | 2 vaults + pool state (+ LP mint) | passive LPs | reserves ratio |
| StableSwap | Yes | hybrid curve (sum-like near peg) | vaults + params (A, etc.) | passive LPs | curve + params |
| CLMM | Yes | liquidity concentrated in ranges/ticks | vaults + tick arrays + position accounts | active LPs | ticks + reserves |
| DLMM (bins) | Yes | discrete bins + liquidity distribution | vaults + bin arrays + position state | active/semi-active LPs | bins + params |
| PMM / oracle-anchored | Yes | price anchored to oracle fair value | inventory + params + oracle feeds | market maker / protocol | oracle + model |
| CLOB (order book) | No (not AMM) | match bids/asks | market + order state | makers | limit orders |
| TWAMM | No (mechanism) | execute large order over time | long-term order state | trader orders | schedule |
| Bonding curve launch | Yes (often) | virtual reserves / issuance curve | curve params + reserves | launch pool | curve |

Trader view: execution quality & UX

| Design | Typical spread / slippage (for same TVL) | “Always liquid”? | Best for | Pain points (trader) | Router friendliness |
| --- | --- | --- | --- | --- | --- |
| CPAMM | worst for tight markets | Yes | long-tail discovery, simple swaps | high price impact without huge TVL | high (simple routes) |
| StableSwap | excellent near peg | Yes (until extreme imbalance) | stable/stable, correlated assets | parameter risk; off-peg behavior | high |
| CLMM | best near spot | No (can be out-of-range) | majors, low slippage | depth depends on LP ranges | high (but more accounts) |
| DLMM | very good when bins are well-set | mostly | structured liquidity & dynamic fees | bin distribution matters | high (but more accounts) |
| PMM | potentially excellent | depends on MM inventory | majors & flow-driven quoting | oracle/model risk; opaque behavior | high if integrated (RFQ-like) |
| CLOB | best when book is thick | n/a | pro trading, limit orders | needs makers & incentives | medium/high (depends on infra) |
| TWAMM | optimized for large orders | n/a | size execution | not instant | routed as a strategy leg |
| Bonding curve | deterministic but can be harsh | curve-dependent | launches | can be gamed / MEV-heavy | usually “launch-only” |

LP view: risk, complexity, and who wins when

| Design | LP position type | Capital efficiency | IL profile | Operational complexity | Who tends to outperform? |
| --- | --- | --- | --- | --- | --- |
| CPAMM | fungible LP token | low | classic IL (full range) | low | passive LPs in long-tail / high fees |
| StableSwap | often fungible LP | high near peg | smaller IL near peg | medium | LPs in correlated pairs |
| CLMM | tokenized/NFT-like position | very high | can be worse if misranged | high | sophisticated LPs / managed vaults |
| DLMM | bin/strategy position state | high (configurable) | strategy-dependent | medium/high | strategy LPs; can be “MM-like” |
| PMM | usually MM-managed inventory | high | model-controlled | high | market makers (not passive LPs) |
| CLOB | maker orders | n/a | inventory risk, not IL | high | professional makers |
| Bonding curve | not traditional LP | n/a | n/a | medium | launch designers + snipers (unless mitigated) |

Solana runtime view: contention, accounts, compute

This is the table people skip, but it often determines what scales.

| Design | What gets written per swap? | Hot-account tendency | Parallelism shape | Tx/account footprint | Notes |
| --- | --- | --- | --- | --- | --- |
| CPAMM | same pool state + both vaults | high | many swaps serialize on same pool | low/medium | simplest, but hotspot-prone |
| StableSwap | same as CP-ish + params | high | similar to CP contention | medium | more compute than CP |
| CLMM | vaults + tick arrays + position-related state | medium | can shard via tick arrays | higher | more accounts; better scaling shape |
| DLMM | vaults + active bin(s) + params | medium | can shard by bins | higher | depends on bin layout |
| PMM | inventory + oracle state + params | low/medium | depends on design | medium | quote updates may dominate |
| CLOB | market state + order matching state | varies | depends on matching engine design | high | crankless helps UX |
| TWAMM | long-term order state + execution legs | n/a | time-sliced | medium/high | often pairs with CLOB/AMM legs |

Parameter surface area (“knobs you must ship and maintain”)

| Design | Parameters you can’t ignore | Tuning difficulty | Common footguns |
| --- | --- | --- | --- |
| CPAMM | fee bps, min liquidity lock, rounding rules | low | overflow in x*y, wrong deposit proportionality |
| StableSwap | amplification A, fee(s), admin fees, ramping | medium/high | bad A → fragility near peg/off-peg |
| CLMM | tick spacing, fee tier(s), init price, range UX | high | tick array provisioning, out-of-range UX |
| DLMM | bin step, dynamic fee curve, rebalancing rules | medium/high | bin skew → bad execution; edge-bin depletion |
| PMM | oracle choice, spread model, inventory/risk limits | very high | stale oracle, model blowups, adversarial flow |
| CLOB | tick size, lot size, maker/taker fees, risk limits | high | dust orders, spam, maker incentives |
| Bonding curve | virtual reserves, slope, caps, migration rules | high | sniping, MEV extraction, mispriced curve |

Token-2022 / “non-standard token semantics” compatibility

Token-2022 extensions change what “amount in” means.

| Token feature | What breaks in naive AMMs | Safe pattern | Designs most sensitive |
| --- | --- | --- | --- |
| Transfer fee | amount_in ≠ vault delta | compute dx = vault_after - vault_before | all curve AMMs |
| Transfer hook | extra logic executed on transfer | strict account lists; avoid re-entrancy assumptions | all; especially CPI-heavy |
| Confidential transfers | you can’t observe amounts easily | often incompatible without special support | most AMMs |
| Interest-bearing | balances drift over time | use observed balances; avoid cached reserves | all pool AMMs |
| Memo/metadata ext | usually fine | no-op | none |

Rule of thumb: if you don’t base math on observed vault deltas, you’re designing for 2019 SPL Token semantics.
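
A minimal sketch of that rule; the function and parameter names are illustrative (in an Anchor program you would read the vault's balance, run the inbound transfer, reload the account, and diff the two readings):

/// Token-2022-safe effective input: what the vault actually received,
/// regardless of what amount_in the instruction claimed.
fn effective_input(vault_before: u64, vault_after: u64) -> Option<u64> {
    // None here means the vault balance went down, i.e. an accounting bug.
    vault_after.checked_sub(vault_before)
}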


MEV & adversarial flow profile

| Design | Sandwich susceptibility | “Pick-off” risk | Mitigations that actually work | Notes |
| --- | --- | --- | --- | --- |
| CPAMM | high | high | private routing, tighter fees, better routing, smaller hops | passive curve is easy to arb |
| StableSwap | medium | medium | similar; parameter robustness | off-peg events get brutal |
| CLMM | medium | high (LPs) | managed LP vaults; dynamic fees | LPs can get wrecked by volatility |
| DLMM | medium | medium/high | dynamic fees, bin strategy | depends on fee model |
| PMM | low/medium | medium | oracle + inventory + RFQ-style routing | “MM-like” behavior |
| CLOB | medium | medium | maker protections, anti-spam, risk controls | depends on market design |
| Bonding curve | very high | very high | anti-bot design + fair launch mechanics | launch is an MEV magnet |

“Which one should I choose?” (builder POV)

| If your goal is… | Pick | Because | But be honest about… |
| --- | --- | --- | --- |
| ship fastest + minimal state | CPAMM | simplest accounts & math | contention + worse execution unless TVL is high |
| best majors execution with public LPs | CLMM | capital efficiency near spot | position UX + account explosion |
| stable pairs / correlated assets | StableSwap | low slippage near peg | parameter tuning & off-peg behavior |
| strategy-friendly liquidity | DLMM | bins + dynamic fees can match volatility | bin UX + more moving parts |
| tight quotes controlled by MM logic | PMM | can beat passive curves | oracle/model risk is the product |
| limit orders + pro features | CLOB | explicit bids/asks | maker bootstrapping + ops complexity |
| reduce impact of whale flow | TWAMM (+ a venue) | time-slicing | needs execution infra |
| token launch discovery path | Bonding curve → migrate | deterministic launch → deep liquidity later | launch MEV + migration design |

A “migration path” table (how protocols evolve in practice)

| Phase | Typical mechanism | Why it fits | What you usually add next |
| --- | --- | --- | --- |
| launch / discovery | bonding curve / small CP pool | simple, deterministic | anti-bot + migration |
| early liquidity | CPAMM | easy integrations | multiple fee tiers / incentives |
| scaling majors | CLMM or DLMM | better execution | managed LP vaults |
| pro trading | CLOB | limit orders | cross-margin/perps |
| flow optimization | PMM / RFQ-like | best execution for routed flow | private routing + inventory mgmt |
| large order UX | TWAMM | reduces impact | bundle/atomic strategies |

Anchor CPAMM: the “don’t ship this” checklist (most common bugs)

1) Proportional deposits are ratios, not products

If you want users to deposit proportionally, you preserve:

  • amount_a / amount_b ≈ reserve_a / reserve_b

A clamp-style approach:

Δb = Δa · (reserve_b / reserve_a)

and then you clamp the other side if user supplies less.

2) LP minting: sqrt(Δa·Δb) is bootstrap-only

For subsequent deposits, use proportional minting:

liquidity = min(Δa · supply / reserve_a, Δb · supply / reserve_b)

Otherwise LP shares drift and you can mint unfairly.

3) Invariant checks must use the full product A·B, computed in u128

If you verify k, do:

  • new_x * new_y >= old_k (often allowing rounding to favor LPs)
  • compute with u128 intermediates.

4) Token-2022: do not trust amount_in

For fee-on-transfer tokens:

  • the only safe dx is vault_after - vault_before.

Minimal “correct CPAMM math” snippet (overflow-safe, vault-delta friendly)

/// Compute CPAMM output (dy) from reserves (x, y) and effective input (dx_eff),
/// using u128 intermediates to avoid u64 overflow.
///
/// IMPORTANT (Token-2022):
/// - If the token can take a transfer fee, compute dx_eff from the observed vault delta:
///   dx_eff = vault_x_after - vault_x_before
pub fn cpamm_out_amount(x: u64, y: u64, dx_eff: u64) -> u64 {
    let x = x as u128;
    let y = y as u128;
    let dx = dx_eff as u128;

    // dy = (y * dx) / (x + dx)
    let den = x + dx;
    if den == 0 {
        return 0;
    }

    let dy = (y * dx) / den;
    dy.min(u64::MAX as u128) as u64
}
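
A quick numeric check with illustrative reserves (these numbers are not from any specific pool):

// Pool: x = 1,000,000, y = 2,000,000; effective input after a 30 bps fee: 997.
// dy = 2,000,000 * 997 / (1,000,000 + 997) = 1,992 (rounded down in the pool's favor).
let dy = cpamm_out_amount(1_000_000, 2_000_000, 997);
assert_eq!(dy, 1_992);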

Extra comparison tables (for the “systems” view)

Public API ergonomics: what you expose to integrators

| Design | “Simple swap” interface | Quote interface | Common integration shape | Gotcha |
| --- | --- | --- | --- | --- |
| CPAMM | swap(amount_in, min_out) | deterministic from reserves | direct CPI | need observed deltas for Token-2022 |
| CLMM | same, but more accounts | tick-dependent | SDK computes accounts | account list errors are common |
| DLMM | similar | bin-dependent + dynamic fee | SDK required | bin selection correctness matters |
| PMM | often RFQ-like | oracle + MM params | router integration is key | “quote freshness” is the product |
| CLOB | order placement | book data | off-chain client + on-chain settle | maker ops are non-trivial |

Testing strategy: what to property-test per design

| Design | Invariants to test | Edge cases | Suggested approach |
| --- | --- | --- | --- |
| CPAMM | k non-decrease (fee), no negative reserves | rounding, overflow, zero-liquidity | property tests with random swaps |
| StableSwap | monotonicity near peg, conservation | extreme imbalance, A ramps | fuzz + numerical bounds |
| CLMM | tick crossing correctness, fee growth | boundary ticks, out-of-range | differential tests vs reference |
| DLMM | bin transitions, dynamic fee function | bin depletion, fee spikes | fuzz + scenario sims |
| PMM | oracle staleness handling, risk limits | oracle outages, adversarial flow | simulation + kill-switch tests |
| CLOB | matching engine correctness | self-trade, partial fills | deterministic replay tests |
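
For the CPAMM row, the k-non-decrease property is easy to state as a property test; a sketch assuming the proptest crate and the cpamm_out_amount helper from the earlier snippet:

use proptest::prelude::*;

proptest! {
    #[test]
    fn k_never_decreases(
        x in 1_000u64..1_000_000_000u64,
        y in 1_000u64..1_000_000_000u64,
        dx in 1u64..1_000_000u64,
    ) {
        let dy = cpamm_out_amount(x, y, dx);
        let old_k = (x as u128) * (y as u128);
        let new_k = ((x + dx) as u128) * ((y - dy) as u128);
        // Rounding always favors the pool, so k may only stay equal or grow.
        prop_assert!(new_k >= old_k);
    }
}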

References (URLs)

Tokio vs tokio-stream in WebSocket adapters - stream-first vs select!

· 6 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

TL;DR

  • Tokio is the runtime and low-level primitives (tasks, I/O, timers, channels, tokio::select!).
  • tokio-stream is an optional companion that:
    • wraps Tokio primitives into Streams (e.g., ReceiverStream, BroadcastStream, IntervalStream);
    • provides combinators (map, filter, merge, timeout, throttle, chunks_timeout, StreamMap) for declarative event pipelines.
  • If your adapter pulls from channels with recv().await and coordinates with select!, you usually don’t need tokio-stream.
  • If your adapter exposes or composes Streams (fan-in, time windows, per-item timeouts, etc.), you do.

What each crate gives you

Tokio (runtime + primitives)

  • #[tokio::main], tokio::spawn, tokio::select!
  • Channels: tokio::sync::{mpsc, broadcast, watch, oneshot}
  • Time: tokio::time::{sleep, interval, timeout}
  • Signals: tokio::signal
  • Typical style: “manual pump” with recv().await inside a select! loop.

tokio-stream (adapters + combinators)

  • Wrappers (Tokio → Stream):
    • wrappers::ReceiverStream<T> wraps mpsc::Receiver<T>
    • wrappers::UnboundedReceiverStream<T> wraps mpsc::UnboundedReceiver<T>
    • wrappers::BroadcastStream<T> wraps broadcast::Receiver<T>
    • wrappers::WatchStream<T> wraps watch::Receiver<T>
    • wrappers::IntervalStream wraps tokio::time::Interval
  • Combinators via StreamExt: next, map, filter, merge (with SelectAll), StreamMap (keyed fan-in), and time-aware ops (timeout, throttle, chunks_timeout) when the crate’s time feature is enabled.

Two idioms for adapters (with complete snippets)

1) Channel + select! (“manual pump”) — no tokio-stream needed

use tokio::{select, signal, sync::mpsc};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let (tx, mut rx) = mpsc::channel::<String>(1024);

    // Example producer
    tokio::spawn(async move {
        let _ = tx.send("hello".to_string()).await;
    });

    // ctrl_c() is a plain Future; pin it so it can be polled by reference in select!
    let sigint = signal::ctrl_c();
    tokio::pin!(sigint);

    loop {
        select! {
            maybe = rx.recv() => {
                match maybe {
                    Some(msg) => { tracing::info!("msg: {msg}"); }
                    None => break, // channel closed
                }
            }
            _ = &mut sigint => {
                tracing::info!("shutting down");
                break;
            }
            else => break,
        }
    }

    Ok(())
}

Pros

  • Minimal dependencies, explicit control and shutdown.
  • Clear backpressure semantics via channel capacity.

Cons

  • Fan-in across many/dynamic sources is verbose.
  • Transformations (map/filter/batch) are hand-rolled.

2) Stream-first (wrappers + combinators), using tokio-stream

use std::time::Duration;
use tokio::sync::mpsc;
use tokio_stream::{
    wrappers::{IntervalStream, ReceiverStream},
    StreamExt, // for .next() and combinators
};

enum AdapterEvent { User(String), Order(String), Heartbeat }

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let (tx_user, rx_user) = mpsc::channel::<String>(1024);
    let (tx_order, rx_order) = mpsc::channel::<String>(1024);

    // Example producers
    tokio::spawn(async move { let _ = tx_user.send("u1".into()).await; });
    tokio::spawn(async move { let _ = tx_order.send("o1".into()).await; });

    let ticker = tokio::time::interval(Duration::from_secs(1));

    let users = ReceiverStream::new(rx_user).map(AdapterEvent::User);
    let orders = ReceiverStream::new(rx_order).map(AdapterEvent::Order);
    let beats = IntervalStream::new(ticker).map(|_| AdapterEvent::Heartbeat);

    // Compose: merge multiple sources and shape the flow
    let events = users
        .merge(orders)
        .merge(beats)
        .throttle(Duration::from_millis(20));
    // throttle() yields a !Unpin stream; pin it before calling next()
    tokio::pin!(events);

    while let Some(ev) = events.next().await {
        match ev {
            AdapterEvent::User(v) => tracing::info!("user: {v}"),
            AdapterEvent::Order(v) => tracing::info!("order: {v}"),
            AdapterEvent::Heartbeat => tracing::debug!("tick"),
        }
    }

    Ok(())
}

Pros

  • Concise fan-in and transforms (filter/map/batch/timeout).
  • Natural fit when returning impl Stream<Item = Event> to consumers.

Cons

  • Adds one dependency; slightly different ownership/lifetimes vs bare Receiver.

Side-by-side: when to use which

| Aspect | Channel + tokio::select! (no tokio-stream) | Stream-first (uses tokio-stream) | What the dependency implies |
| --- | --- | --- | --- |
| Why it’s used | Pull from channels via recv().await, coordinate with select!. | Wrap Tokio primitives as Streams and/or use combinators. | Presence of tokio-stream signals a stream-centric composition. |
| Primary abstraction | Futures + channels + select!. | Stream<Item = T> + wrappers + StreamExt. | Stream API → extra crate. |
| Typical code | while let Some(x) = rx.recv().await {}, select! { ... } | ReceiverStream::new(rx).map(...).merge(...).next().await | Wrappers/combinators imply tokio-stream. |
| Fan-in / merging | Manual select! arms; verbose for many/dynamic sources. | merge, SelectAll, or StreamMap for succinct fan-in. | tokio-stream buys tools for multiplexing. |
| Timers / heartbeats | interval() polled in loops. | IntervalStream + timeout/throttle/chunks_timeout. | Time-aware ops rely on tokio-stream + features. |
| Public API shape | Pull: async fn next_event() -> Option<T>. | Stream: fn into_stream(self) -> impl Stream<Item = T>. | Exposing a stream often requires the crate. |
| Composability | Hand-rolled transforms. | One-liners with StreamExt (map/filter/batch). | Enables declarative pipelines. |
| Backpressure | Channel capacity governs it; explicit. | Same channels underneath; wrappers don’t change capacity. | Neutral; it’s about ergonomics. |
| Fairness/ordering | select! randomizes fairness per iteration. | Per-stream order preserved; cross-stream order depends on combinator. | Document semantics either way. |
| Testability | Manual harnesses around loops. | .take(n), .collect::<Vec<_>>(), etc. | Stream APIs are often easier to test. |
| Cost / deps | Lean; no extra crate. | Adds tokio-stream; thin adapter overhead. | Main cost is dependency surface. |

Design recipes (complete, paste-ready)

A) Channel-first everywhere (leanest; drop tokio-stream)

  • Keep a pull API like next_event().
  • Use tokio::time::timeout for per-item deadlines.
use std::time::Duration;
use tokio::{sync::mpsc, time::timeout};

pub async fn pump_with_timeout(mut rx: mpsc::Receiver<String>) -> anyhow::Result<()> {
    loop {
        match timeout(Duration::from_secs(5), rx.recv()).await {
            Ok(Some(msg)) => tracing::info!("msg: {msg}"),
            Ok(None) => break, // channel closed
            Err(_) => tracing::warn!("no event within 5s"),
        }
    }
    Ok(())
}

B) Offer both (feature-gated Stream API)

Cargo.toml

[features]
default = []
stream-api = ["tokio-stream"]

[dependencies]
tokio = { version = "1", features = ["rt-multi-thread","macros","sync","time","signal"] }
tokio-stream = { version = "0.1", optional = true }

Client

#[cfg(feature = "stream-api")]
use tokio_stream::wrappers::ReceiverStream;

pub struct Client {
    rx_inbound: tokio::sync::mpsc::Receiver<MyEvent>,
}

impl Client {
    pub async fn next_event(&mut self) -> Option<MyEvent> {
        self.rx_inbound.recv().await
    }

    #[cfg(feature = "stream-api")]
    pub fn into_stream(self) -> ReceiverStream<MyEvent> {
        ReceiverStream::new(self.rx_inbound)
    }
}

C) Stream-first everywhere (plus pull convenience)

  • Internally fan-out via broadcast so multiple consumers can subscribe.
use tokio::sync::{mpsc, broadcast};
use tokio_stream::wrappers::BroadcastStream;

pub struct Client {
    rx_inbound: mpsc::Receiver<Event>, // pull path
    bus: broadcast::Sender<Event>,     // stream path
    _reader: tokio::task::JoinHandle<()>,
}

impl Client {
    pub async fn next_event(&mut self) -> Option<Event> {
        self.rx_inbound.recv().await
    }

    pub fn event_stream(&self) -> BroadcastStream<Event> {
        BroadcastStream::new(self.bus.subscribe())
    }
}

D) Expose a Stream without tokio-stream

  • Implement Stream directly over mpsc::Receiver via poll_recv.
use futures_core::Stream;
use std::{pin::Pin, task::{Context, Poll}};
use tokio::sync::mpsc;

pub struct EventStream<T> {
    rx: mpsc::Receiver<T>,
}

impl<T> EventStream<T> {
    pub fn new(rx: mpsc::Receiver<T>) -> Self { Self { rx } }
}

impl<T> Stream for EventStream<T> {
    type Item = T;

    // mpsc::Receiver<T> is Unpin, so Pin<&mut Self> can be used like &mut Self here;
    // no pin-projection crate is needed.
    fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        self.rx.poll_recv(cx)
    }
}
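
A usage sketch, assuming the futures-util crate (for its StreamExt::next) and the EventStream type above:

use futures_util::StreamExt;

#[tokio::main]
async fn main() {
    let (tx, rx) = tokio::sync::mpsc::channel::<u32>(8);
    tokio::spawn(async move {
        for i in 0..3 {
            let _ = tx.send(i).await;
        }
    });

    // EventStream is Unpin, so .next() from futures-util works on it directly.
    let mut events = EventStream::new(rx);
    while let Some(v) = events.next().await {
        println!("got {v}");
    }
}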

Performance, backpressure, ordering

  • Overhead: ReceiverStream is a thin adapter; hot-path costs are typically parsing/allocations, not the wrapper.
  • Backpressure: unchanged—governed by channel boundedness and consumer speed.
  • Ordering: per-stream order is preserved; merged streams don’t guarantee global order—timestamp if strict ordering matters.
  • Fairness: tokio::select! randomizes branch polling; stream fan-in fairness depends on the specific combinator (merge, SelectAll, StreamMap).

A quick decision checklist

  • Need to return impl Stream<Item = Event> or use stream combinators? → Use tokio-stream.
  • Only need a single event loop with recv().await and select!? → Tokio alone is fine.
  • Want both ergonomics and lean defaults? → Feature-gate a stream view (stream-api).

References (URLs)

Hyperliquid Gasless Trading – Deep Comparison, Fees, and 20 Optimized Strategies

· 6 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

TL;DR Hyperliquid runs its own Layer-1 with two execution domains:

  • HyperCore — native on-chain central limit order book (CLOB), margin, funding, liquidations.
  • HyperEVM — standard EVM runtime (gas metered, paid in HYPE).

Trading on HyperCore is gasless: orders, cancels, TP/SL, TWAP, Scale ladders, etc. are signed actions included in consensus, not EVM transactions.

  • You don’t need HYPE to place/cancel orders.
  • You pay maker/taker fees and funding, not gas.
  • Spam is mitigated with address budgets, rate limits, open-order caps.
  • If you need more throughput: buy request weight at $0.0005 per action.

The design enables CEX-style strategies (dense ladders, queue dancing, rebates, hourly hedging) without the friction of gas.

Official GitHub repos:


1. How “gasless” works

Order lifecycle

Wallet signs payload  →  Exchange endpoint → Node → Validators (HyperBFT)
↘ deterministic inclusion into HyperCore state
  • Signatures, not transactions. Your wallet signs payloads (EIP-712 style). These are posted to the Exchange endpoint, gossiped to validators, ordered in consensus, and applied to HyperCore. → No gas, just signature.

  • Onboarding. Enable trading = sign once. Withdrawals = flat $1 fee, not a gas auction. Docs → Onboarding

  • Spam protection.

    • Address budgets: 10k starter buffer, then 1 action per 1 USDC lifetime fills.
    • Open-order cap: base 1,000 → scales to 5,000.
    • Congestion fairness: max 2× maker-share per block.
    • ReserveRequestWeight: buy capacity at $0.0005/action. Docs → Rate limits
  • Safety rails.

    • scheduleCancel (dead-man’s switch)
    • expiresAfter (time-box an action)
    • noop (nonce invalidation)
  • Order types. Market, Limit, ALO (post-only), IOC, GTC, TWAP, Scale, TP/SL (market or limit), OCO. Docs → Order types

  • Self-trade prevention. Expire-maker: cancels resting maker side instead of self-fill. Docs → STP
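
The budgets above are easy to mirror client-side so a bot throttles itself before the API does. A small illustrative Rust sketch (struct and field names are ours; the numbers are the documented limits quoted above):

struct ActionBudget {
    lifetime_filled_usdc: u64, // cumulative filled volume attributed to the address
    actions_used: u64,
}

impl ActionBudget {
    /// 10k starter buffer, then 1 extra action per 1 USDC of lifetime fills.
    fn allowance(&self) -> u64 {
        10_000 + self.lifetime_filled_usdc
    }

    fn can_send(&self) -> bool {
        self.actions_used < self.allowance()
    }
}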


2. Fees: Hyperliquid vs DEXes & CEXes

Perps (base tiers)

| Venue | Maker | Taker | Notes |
| --- | --- | --- | --- |
| Hyperliquid | 0.015% | 0.045% | Gasless actions; staking discounts up to 40%; rebates up to –0.003% |
| dYdX v4 | 0.01% | 0.05% | Gasless submits/cancels; fills only |
| GMX v2 (perps) | 0.04–0.06% | 0.04–0.06% | Round-trip 0.08–0.12% + funding/borrow + L2 gas |
| Binance Futures | ~0.018% | ~0.045% | VIP/BNB discounts; USDC-M can hit 0% maker |
| Bybit Perps | 0.020% | 0.055% | Tiered; VIP reductions |
| OKX Futures | 0.020% | 0.050% | VIP can reach –0.005% / 0.015% |
| Kraken Futures | 0.020% | 0.050% | Down to 0% / 0.01% at scale |

Spot

| Venue | Maker | Taker | Gas |
| --- | --- | --- | --- |
| Hyperliquid | 0.040% | 0.070% | Gasless actions; $1 withdraw |
| Uniswap v3 | 0.01–1% | 0.01–1% | User pays gas; or solver embeds in price |
| Bybit Spot | 0.15% | 0.10–0.20% | CEX; no gas |
| OKX Spot | 0.08% | 0.10% | VIP/OKB discounts |

3. Funding models

  • Hyperliquid: 8h rate paid hourly (1/8 each hour). Hyperps use EMA mark (oracle-light).
  • dYdX v4: hourly funding; standard premium/interest.
  • GMX v2: continuous borrow vs pool imbalance.
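
Since the 8h rate settles in hourly slices, the per-hour payment on a position is just notional × rate / 8; an illustrative helper (not an exchange API):

/// Hourly funding payment when an 8h rate is settled in 1/8 slices each hour.
/// rate_8h is a decimal, e.g. 0.0001 = 1 bp per 8 hours.
fn hourly_funding_usd(position_notional_usd: f64, rate_8h: f64) -> f64 {
    position_notional_usd * rate_8h / 8.0
}

// $100,000 notional at 0.01% per 8h ≈ $1.25 per hour.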

4. What gasless enables (tactically)

  • Dense ladders + queue dancing: cheap to modify/cancel 1000s of levels.
  • Granular hedging: rebalance perps/spot hedges hourly without friction.
  • CEX-style STP + ALO: protect queue priority.
  • Deterministic inclusion: HyperBFT ensures one global order sequence.
  • Predictable scaling: buy request weight explicitly instead of gas auction.

5. Ten core strategies

  1. Passive Maker Ladder (ALO + STP) Build dense post-only ladders, earn spread + rebates, cancel/repost gas-free.

  2. Rebate Farming (maker-share) Hit ≥0.5%, 1.5%, 3% maker volume shares to unlock –0.001%/–0.002%/–0.003%.

  3. Funding-Arb / Cash-and-Carry Long spot vs short perp; rebalance hourly gas-free.

  4. TWAP Execution Use native 30s slice TWAP with slippage caps; gasless param tweaks.

  5. Scale Order Grids Deploy wide grids with up to 5k resting orders; adjust spacing by ATR.

  6. Latency-Aware MM Run node, use noop for stale nonces.

  7. OCO Risk-Boxing (TP/SL) Parent-linked stops/targets; frequent adjustment gasless.

  8. Hyperps Momentum/Fade Trade EMA-based hyperps; funding skew stabilizes. Turnkey repo

  9. Dead-Man’s Switch Hygiene Always use scheduleCancel; pair with expiresAfter.

  10. Throughput Budgeting Add logic to purchase reserveRequestWeight at spikes.


6. Ten advanced strategies

  1. Maker-Skewed Basis Harvest Hedge legs passively, collect rebates + funding.

  2. Adaptive Spread Ladder Contract/expand quotes with realized vol; keep order count fixed.

  3. Queue-Position Arbitrage Gasless modify to overtake by 1 tick; requires local queue estimation.

  4. Stale-Quote Punisher Flip passive→taker when off-chain anchors are stale.

  5. Rebate-Neutral Market Impact Hedger Pre-compute edge ≈ (S/2 − A − f_m); trade only when ≥0.

  6. Funding Skew Swing-Trader Switch between mean-revert & trend based on funding drift.

  7. Dead-Man Sessioner Each session starts with scheduleCancel(t) to avoid zombie orders.

  8. Liquidity Layer Splitter Spread ladders across accounts; use STP to avoid self-trades.

  9. Cross-Venue Micro-Arb HL vs CEX/DEX; taker on mispriced side, maker on the other.

  10. Event-Mode Capacity Burst Pre-buy request weight pre-CPI/FOMC; change ladder parameters.


7. Cost sanity check ($100k notional)

  • Hyperliquid: 0.015% maker ($15) + 0.045% taker ($45) = $60 (+ funding).
  • dYdX v4: 0.01% + 0.05% = $60.
  • GMX v2: 0.04–0.06% open + 0.04–0.06% close = $80–120 (+ borrow + gas).
  • Binance Futures: 0.018% + 0.045% ≈ $63 (base VIP).
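
The same comparison as a one-liner, with fees in basis points (illustrative helper, not an exchange API):

/// Round-trip cost for the list above: one maker leg plus one taker leg on the same notional.
fn round_trip_cost_usd(notional_usd: f64, maker_bps: f64, taker_bps: f64) -> f64 {
    notional_usd * (maker_bps + taker_bps) / 10_000.0
}

// Hyperliquid base tier: round_trip_cost_usd(100_000.0, 1.5, 4.5) = $60 (plus funding).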

8. Implementation gotchas

  • Budgets & caps: track in code; cancels have higher allowance; throttling needed.
  • Min sizes: perps $10 notional; spot 10 quote units.
  • ExpiresAfter: avoid triggering (5× budget cost).
  • Node ops: run Linux, open ports 4001/4002, colocate in Tokyo.
  • Nonces: prefer modify; use noop if stuck.

9. Comparison snapshot

  • Hyperliquid & dYdX v4 — gasless trading actions, on-chain CLOB, deterministic finality.
  • UniswapX / CoW — user-gasless via solver; solver pays gas, embeds in your price.
  • Uniswap v3/v4, GMX — user pays gas + pool fee; MEV & slippage dominate costs.
  • CEXes — no gas, lowest fees at VIP, fiat rails; but centralized custody.

10. GitHub Index


Bottom Line

Hyperliquid takes gas out of the trading loop, letting traders focus on fees, funding, latency, and inventory control. The result: a CEX-like experience with on-chain transparency.

Best use cases:

  • High-frequency maker strategies (queue-dancing, rebates).
  • Funding arbitrage with fine-grained rebalancing.
  • Event-driven hedging.
  • Developers who want to build bots in Python/Rust/TS/Go without juggling gas balances.

Slaying Bullish Bias - A Market Wizards Playbook

· 8 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

“The markets are never wrong; opinions often are.”
—Jesse Livermore (quoted by Bruce Kovner in Market Wizards)

2025 is a cognitive trap for equity bulls. The Ukraine front barely moves, President Trump’s blanket 10 % tariff rattles importers, and German GDP just printed –0.6 % QoQ—yet the S&P 500 hovers north of 5 500.
If that disconnect feels comfortable, your built-in bullish bias (the reflex that “prices should rise”) is probably steering the wheel.

Below you’ll find the fully annotated 30-question audit that the original Market Wizards might run if they sat at your terminal today. Each line now includes:

  • Wizard Insight – the lesson Schwager’s interviewees hammered home.
  • 2025 Angle – why the trap is live right now.
  • Real-World Example – an actual 2025 tape or trade vignette.

Paste the checklist into your trading journal, sprint through one block per week, and watch your P/L detach from hope-fuelled drift.


1 Self-Diagnosis & Mind-Set

| # | Question | Wizard Insight | 2025 Angle | Real-World Example |
| --- | --- | --- | --- | --- |
| 1 | Do you scan for longs first? | Mark Cook forced students to open a bearish filter before coffee. | All major U.S. broker dashboards open on “Top Gainers.” | 11 Mar 2025: NVDA +6 % headlined your grid; bottom losers list showed LUMN –13 % (a better 2-R short you never saw). |
| 2 | 5 % drop—curiosity or dip euphoria? | Paul Tudor Jones cut leverage 50 % within minutes on 19 Oct 1987. | 15 Mar 2025: SPX –5.1 %, VIX 34 → index kept sliding another –2 % before basing. | You felt “great entry” and bought QQQ, stopped out –1 R next day. |
| 3 | Does shorting feel “un-American”? | Tom Baldwin joked “The pits only cheer the upside.” | Media framed every 2024 sell-off as “unpatriotic betting.” | You posted a bearish tweet on Apple and got piled-on for “fighting innovation.” |
| 4 | Dips = noise, rallies = trends? | Ed Seykota logged only % risk and ATR multiples—no adjectives. | CNBC still calls –2 % a “slump” but +2 % a “rally.” | 23 Apr 2025 journal: “just a blip lower” (SPX –1.8 %), “solid up-trend” (+1.6 %). |
| 5 | Is self-worth tied to rising curves? | Seykota kept family money in T-Bills. | Real college costs +6 % YoY; equity drift no longer guarantees coverage. | You increased size after your kid’s tuition invoice hit inbox. |

2 Historical Perspective & Narrative Traps

| # | Question | Wizard Insight | 2025 Angle | Real-World Example |
| --- | --- | --- | --- | --- |
| 6 | How did you fare in each mini-crash? | Jones was green in ’87; Raschke flat in ’98. | 2022 bear (–27 %) still on broker statement. | Your 2022 curve: –18 % vs CTA index +13 %. |
| 7 | Tested your edge with drift = 0? | Seykota’s systems worked on pork bellies—no drift. | Forward SPX drift est. < 4 %. | Your momentum back-test Sharpe fell from 1.2 ➜ 0.48. |
| 8 | Rely on “Don’t bet against America”? | Kovner warns empires rotate. | Proposed 2 % buy-back tax in House bill HR-1735. | Removing buy-backs in DCF knocked 7 turns off Apple PE. |
| 9 | Ignoring survivorship in Wizard lore? | Schwager himself says thousands blew up. | TikTok “profit porn” hides losers. | Your Telegram group shares only green screenshots. |
| 10 | Studied markets that never bounced? | Japanese believers held Nikkei bags for 34 yrs. | Greek ASE –85 % from ’07 peak even now. | Your Europe ETF overweight assumes 7 % CAGR. |

3 Quantitative Evidence

| # | Question | Wizard Insight | 2025 Angle | Real-World Example |
| --- | --- | --- | --- | --- |
| 11 | Shorts share of tickets & P/L? | Cook: “Trade both sides or half your vision is gone.” | Q1-25 had strongest 3-day down-impulse since Covid lows. | 9 shorts out of 112 trades; net P/L –2 R. |
| 12 | Invert your long signal—result? | Seykota’s “whipsaw song” works both ways. | High-short-interest anomaly revived with expensive rates. | Inverted signal on same universe scored Sharpe 0.32. |
| 13 | Price vs log-return testing? | Wizards think in % risk. | Nasdaq 100 raw-point rise masks compounding. | Strategy CAGR fell from 18 % ➜ 11 % in log mode. |
| 14 | Stop symmetry? | Raschke: 2 ATR both sides. | Meme squeezes tempt 1 ATR shorts, 3 ATR longs. | Last month: 6 short stop-outs at –1 ATR, 2 long at –3 ATR. |
| 15 | Monte-Carlo μ = 0 survival? | Jones funds vol desks to weather drift drought. | Commodity volatility doubles path risk now. | 10 000 paths: median curve flatlines by month 22. |

4 Risk & Capital Allocation

| # | Question | Wizard Insight | 2025 Angle | Real-World Example |
| --- | --- | --- | --- | --- |
| 16 | Exposure cap symmetric? | Seykota could flip net ±200 %. | Short borrow fees sub-1 % for 80 % of S&P names. | You allow +150 % long, –25 % short. |
| 17 | Averaging down losers? | Kovner: “Losers average losers.” | AI chip names drop 18 % intraday regularly. | Added twice to AMD at –3 % and –6 %; closed –2 R. |
| 18 | Cover shorts first in vol spikes? | Tudor held shorts through crash until vol bled. | Post-VIX-34 drift negative for 12 sessions. | Closed TSLA short on spike, kept long tech—lost 1.4 R. |
| 19 | Put hedge value? | Jones buys vol only when skew cheap. | 1-month ATM put cost 1.8 % in Mar 2025. | Last year: spent 3.4 R in premium, saved 1.1 R in crashes. |
| 20 | Squeezes breach worst-case loss? | Baldwin sized by dollar vol. | Feb 2025 GME +40 % gap. | Short lost 2.3 R overnight. |

5 Process & Decision Architecture

| # | Question | Wizard Insight | 2025 Angle | Real-World Example |
| --- | --- | --- | --- | --- |
| 21 | UI bias toward gainers? | Seykota coded neutral dash. | Broker UIs show green first. | Missed FSLY –12 % fail because list buried. |
| 22 | Short checklist depth? | Raschke rehearses shorts like longs. | Easier borrows post-reg changes. | Long checklist 12 items; short only 5. |
| 23 | Narrative only for shorts? | Wizards trust price. | News calls every dip an “overreaction.” | Skipped META short for lack of “fundamental story”; missed –8 %. |
| 24 | Post-mortem balance? | Cook logs every miss. | Feb 2025: three perfect failed-break short signals unreviewed. | Reviewed 7 missed longs, 0 shorts. |
| 25 | Auto-flip after failed breakout? | “Failed move = fast move” —Soros. | AI names fake breakouts weekly. | Long NVDA fake-out –1 R, no flip; price dropped another 4 %. |

6 Psychology & Continuous Improvement

| # | Question | Wizard Insight | 2025 Angle | Real-World Example |
| --- | --- | --- | --- | --- |
| 26 | Bias tags clustering on longs? | Jones hired risk coach. | AI tools auto-tag sentiment now. | 65 % optimism tags on long entries, 15 % on shorts. |
| 27 | Real-time beta alerts? | Tudor’s board lit red at β > 0.7. | Slack webhooks trivial. | Hit 0.78 beta on 9 Apr, noticed next day. |
| 28 | Gap-down rehearsal? | Basso ran crash sims monthly. | Turkey ETF gap –12 % overnight, Feb 2025. | Panicked exit + slippage –1 R; never rehearsed scenario. |
| 29 | Forced-flat longs feeling? | Seykota welcomes dry powder. | Broker outage flushed longs 14 Jan. | Felt panic → identity fusion with bull thesis. |
| 30 | Preparing for lower drift? | Wizards add new edges. | Demographics & reshoring compress margins. | Equity CAGR model still at 8 %. |

7 Wrap-Up

Bullish bias survives because it pays most of the time—until it erases years of gains in a single macro grenade.
The Market Wizards neutralised the bias through symmetry: equal screens, stops, reviews, and above all, equal respect for up and down tape.

Run this playbook once per quarter:

  1. Audit each question honestly.
  2. Patch the weakest habit or policy.
  3. Re-test your edge in a zero-drift simulation.
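
A minimal zero-drift re-test sketch, assuming the rand crate; it is not from the book, it simply removes the mean from your own trade returns and resamples equity paths:

use rand::seq::SliceRandom;

/// Remove the average return (the drift) from a sample of past trades, then
/// resample equity paths and report the share that still finish positive.
fn zero_drift_survival(trade_returns: &[f64], n_paths: usize, path_len: usize) -> f64 {
    if trade_returns.is_empty() || n_paths == 0 {
        return 0.0;
    }
    let mean = trade_returns.iter().sum::<f64>() / trade_returns.len() as f64;
    let demeaned: Vec<f64> = trade_returns.iter().map(|r| r - mean).collect();
    let mut rng = rand::thread_rng();
    let mut survived = 0usize;
    for _ in 0..n_paths {
        let pnl: f64 = (0..path_len)
            .map(|_| *demeaned.choose(&mut rng).unwrap())
            .sum();
        if pnl > 0.0 {
            survived += 1;
        }
    }
    survived as f64 / n_paths as f64
}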

Do that, and the next tariff volley, energy spike, or AI bubble unwind becomes just another tradeable regime—not a career-ending ambush.

Happy (bias-free) trading!

Contributing a Safer MarketIfTouchedOrder to Nautilus Trader — Hardening Conditional Orders in Rust

· 3 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

TL;DR – PR #2577 introduces a fallible constructor, complete domain-level checks, and four focussed tests for MarketIfTouchedOrder, thereby closing long-standing Issue #2529 on order-validation consistency.


1 Background

MarketIfTouchedOrder (MIT) is effectively the reverse of a stop-market order: it lies dormant until price touches a trigger, then fires as an immediate market order.
Because a latent trigger feeds straight into an instant fill path, robust validation is non-negotiable—any silent mismatch becomes a live trade.


2 Why the Change Was Necessary

| Problem | Impact |
| --- | --- |
| Partial positivity checks on quantity, trigger_price, display_qty | Invalid values propagated deep into matching engines before exploding |
| TimeInForce::Gtd accepted expire_time = None | Programmer thought they had “good-til-date”; engine treated it as GTC |
| No check that display_qty ≤ quantity | Iceberg slice could exceed total size, leaking full inventory |
| Legacy new API only panicked | Call-site couldn’t surface errors cleanly |

Issue #2529 demanded uniform, fail-fast checks across all order types; MIT was first in line.


3 What PR #2577 Delivers

| Area | Before (v0) | After (v1) |
| --- | --- | --- |
| Constructor | new → panic on error | new_checked → anyhow::Result<Self>; new now wraps it |
| Positivity checks | Partial | Guaranteed for quantity, trigger_price, (optional) display_qty |
| GTD orders | expire_time optional | Required when TIF == GTD |
| Iceberg rule | None | display_qty ≤ quantity |
| Error channel | Opaque panics | Precise anyhow::Error variants |
| Tests | 0 | 4 rstest cases (happy-path + 3 failure modes) |

Diff stats: +159 / −13 – one file crates/model/src/orders/market_if_touched.rs.
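
The GTD guard from the table above boils down to a check of this shape; this is a simplified sketch with local types, not the actual crate code:

use anyhow::{bail, Result};

#[derive(Clone, Copy, PartialEq)]
enum TimeInForce { Gtc, Gtd, Ioc }

fn check_gtd_expire(tif: TimeInForce, expire_time: Option<u64>) -> Result<()> {
    if tif == TimeInForce::Gtd && expire_time.is_none() {
        bail!("GTD order requires expire_time to be set");
    }
    Ok(())
}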


4 File Walkthrough Highlights

  1. new_checked – all domain guards live here; returns Result.
  2. Guard helpers – re-uses check_positive_quantity, check_positive_price, check_predicate_false.
  3. Legacy compatibility – new() simply calls Self::new_checked(...).expect(FAILED).
  4. apply() tweak – slippage is recomputed immediately after a fill event.
  5. Tests – ok, quantity_zero, gtd_without_expire, display_qty_gt_quantity.

6 Order-Lifecycle Diagram


7 Using the New API

let mit = MarketIfTouchedOrder::new_checked(
    trader_id,
    strategy_id,
    instrument_id,
    client_order_id,
    OrderSide::Sell,
    qty,
    trigger_price,
    TriggerType::LastPrice,
    TimeInForce::Gtc,
    None,         // expire_time
    false, false, // reduce_only, quote_quantity
    None, None,   // display_qty, emulation_trigger
    None, None,   // trigger_instrument_id, contingency_type
    None, None,   // order_list_id, linked_order_ids
    None,         // parent_order_id
    None, None,   // exec_algorithm_id, params
    None,         // exec_spawn_id
    None,         // tags
    init_id,
    ts_init,
)?;

Prefer new_checked in production code; if you stick with new, you’ll still get clearer panic messages.


8 Impact & Next Steps

  • Fail-fast safety – all invariants enforced before the order leaves your code.
  • Granular error reporting – propagate Result outward instead of catching panics.
  • Zero breaking changes – legacy code continues to compile.

Action items: migrate to new_checked, bubble the Result, and sleep better during live trading.


9 References

| Type | Link |
| --- | --- |
| Pull Request #2577 | https://github.com/nautechsystems/nautilus_trader/pull/2577 |
| Issue #2529 | https://github.com/nautechsystems/nautilus_trader/issues/2529 |

Happy (and safer) trading!