
2 posts tagged with "solana"


DeFiTuna: On-Chain Limit Orders on Solana

· 15 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

TL;DR

This guide demonstrates a real-world implementation of DeFiTuna limit orders on Solana mainnet, focusing on:

  • Direct RPC Interaction: Building and submitting transactions without high-level SDKs
  • On-Chain Limit Order Storage: How limit order parameters are encoded in account data
  • Position Lifecycle: Open position → Set limits → Close position with actual mainnet transactions
  • Account Derivation: PDA calculations for spot positions and associated token accounts
  • DeFiTuna SDK Patterns: Insights from the official TypeScript/Rust SDK implementation

Live Mainnet Transactions:

Introduction

DeFiTuna combines Orca Whirlpools with on-chain limit orders, enabling automated trading triggers without requiring active monitoring. Unlike traditional AMMs where you passively provide liquidity, DeFiTuna positions execute predefined trades when price conditions are met.

This article walks through the complete implementation—from deriving PDAs to encoding instruction data—using Rust with solana-sdk and insights from the DeFiTuna SDK.

Architecture Overview

DeFiTuna + Orca Integration

Key Insight: DeFiTuna is a Protocol Layer

DeFiTuna doesn't implement its own AMM. Instead, it:

  1. Wraps existing AMMs (Orca Whirlpools, Fusion pools)
  2. Adds limit order logic on top
  3. Manages leveraged positions and lending vaults

On-Chain Account Structure

Position Account Data Layout (346 bytes)

From our mainnet position 8wvKhHXHfzY4eQZTyK4kTfUtGj46XX5UX8P4S5kBbJ5:

Offset    Field               Type    Bytes   Description
-------   ------------------  ------  ------  -----------------------
0-8       Discriminator       u64     8       Account type identifier
8-40      Authority           Pubkey  32      Position owner
40-72     Pool                Pubkey  32      Orca Whirlpool address
72-73     Position Token      u8      1       0=TokenA, 1=TokenB
73-74     Collateral Token    u8      1       0=TokenA, 1=TokenB
92-100    Amount              u64     8       Position size
100-108   Borrowed            u64     8       Borrowed amount
184-200   Lower Limit √Price  u128    16      Buy trigger price
200-216   Upper Limit √Price  u128    16      Sell trigger price

Hex dump from mainnet:

00b8:  60 e4 c0 d6 1c 8e 68 01  00 00 00 00 00 00 00 00   Lower limit
00c8:  70 17 34 50 e5 c9 7c 01  00 00 00 00 00 00 00 00   Upper limit

These bytes encode:

  • Lower: 6651068162312125808640 → $130
  • Upper: 7024310870365581606912 → $145
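Reading fixed-offset fields like these out of raw account data is just little-endian slicing. The helpers below are a minimal sketch (the function names are mine, not the official SDK's), exercised here against a synthetic 346-byte buffer rather than live account data:

```rust
// Illustrative little-endian field readers for the layout table above.
// Offsets are taken from that table; the buffer here is synthetic test data.
fn read_u64_le(data: &[u8], offset: usize) -> u64 {
    u64::from_le_bytes(data[offset..offset + 8].try_into().unwrap())
}

fn read_u128_le(data: &[u8], offset: usize) -> u128 {
    u128::from_le_bytes(data[offset..offset + 16].try_into().unwrap())
}

fn main() {
    // Synthetic 346-byte buffer standing in for real position account data.
    let mut data = vec![0u8; 346];
    data[92..100].copy_from_slice(&42u64.to_le_bytes()); // Amount
    data[184..200].copy_from_slice(&6651068162312125808640u128.to_le_bytes()); // Lower limit

    assert_eq!(read_u64_le(&data, 92), 42);
    assert_eq!(read_u128_le(&data, 184), 6651068162312125808640);
    println!("layout reads OK");
}
```

The same pattern is used later in `parse_position_limits` for the real on-chain bytes.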

Implementation: Rust Binaries

Project Structure

bots/defituna-bot/
├── Cargo.toml
├── .env                          # Configuration
├── defituna.json                 # Program IDL
└── src/
    ├── config.rs                 # Shared config
    └── bin/
        ├── open_spot_position.rs # Create position
        ├── set_limit_orders.rs   # Configure triggers
        ├── check_position.rs     # Query state
        └── close_position.rs     # Cleanup

1. Opening a Spot Position

File: src/bin/open_spot_position.rs

use anyhow::Result;
use solana_client::rpc_client::RpcClient;
use solana_sdk::{
    instruction::{AccountMeta, Instruction},
    pubkey::Pubkey,
    signature::{Keypair, Signer},
    system_program,
    transaction::Transaction,
};
use spl_associated_token_account::get_associated_token_address;
use std::str::FromStr;

// DeFiTuna program ID (mainnet)
const DEFITUNA_PROGRAM: &str = "tuna4uSQZncNeeiAMKbstuxA9CUkHH6HmC64wgmnogD";

// Orca SOL/USDC Whirlpool
const WHIRLPOOL: &str = "Czfq3xZZDmsdGdUyrNLtRhGc47cXcZtLG4crryfu44zE";

fn main() -> Result<()> {
    // rpc_client, executor_keypair, SOL_MINT and USDC_MINT are initialized
    // as in the complete example at the end of this article.
    let program_id = Pubkey::from_str(DEFITUNA_PROGRAM)?;
    let whirlpool = Pubkey::from_str(WHIRLPOOL)?;
    let authority = executor_keypair.pubkey();

    // Derive spot position PDA
    let (tuna_spot_position, _bump) = Pubkey::find_program_address(
        &[
            b"tuna_spot_position",
            authority.as_ref(),
            whirlpool.as_ref(),
        ],
        &program_id,
    );

    // Token program IDs
    let token_program_a = spl_token::ID; // SOL uses the standard token program
    let token_program_b = spl_token::ID; // USDC uses the standard token program

    // Derive associated token accounts for the position
    let tuna_position_ata_a = get_associated_token_address(&tuna_spot_position, &SOL_MINT);
    let tuna_position_ata_b = get_associated_token_address(&tuna_spot_position, &USDC_MINT);

    // Build instruction
    // Discriminator from IDL: [87, 208, 173, 48, 231, 62, 210, 220]
    let mut data = vec![87, 208, 173, 48, 231, 62, 210, 220];

    // Args: position_token (PoolToken::A = 0), collateral_token (PoolToken::B = 1)
    data.push(0); // Trading SOL (position_token = A)
    data.push(1); // Collateral is USDC (collateral_token = B)

    let instruction = Instruction {
        program_id,
        accounts: vec![
            AccountMeta::new(authority, true),                 // authority (signer, writable)
            AccountMeta::new_readonly(SOL_MINT, false),        // mint_a
            AccountMeta::new_readonly(USDC_MINT, false),       // mint_b
            AccountMeta::new_readonly(token_program_a, false), // token_program_a
            AccountMeta::new_readonly(token_program_b, false), // token_program_b
            AccountMeta::new(tuna_spot_position, false),       // tuna_position (writable)
            AccountMeta::new(tuna_position_ata_a, false),      // tuna_position_ata_a
            AccountMeta::new(tuna_position_ata_b, false),      // tuna_position_ata_b
            AccountMeta::new_readonly(whirlpool, false),       // pool (Orca Whirlpool)
            AccountMeta::new_readonly(system_program::ID, false), // system_program
            AccountMeta::new_readonly(spl_associated_token_account::ID, false), // associated_token_program
        ],
        data,
    };

    // Create and sign transaction
    let recent_blockhash = rpc_client.get_latest_blockhash()?;
    let transaction = Transaction::new_signed_with_payer(
        &[instruction],
        Some(&authority),
        &[&executor_keypair],
        recent_blockhash,
    );

    // Send to RPC
    let signature = rpc_client.send_and_confirm_transaction(&transaction)?;
    println!("Position created: {}", signature);

    Ok(())
}

Key Points:

  • No collateral required to open position (just creates account structure)
  • Position PDA seeds: ["tuna_spot_position", authority, whirlpool]
  • Instruction creates position account + 2 associated token accounts
  • Cost: ~0.00329904 SOL rent + ~0.000005 SOL gas
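The rent figure is not arbitrary: it is the rent-exempt minimum for a 346-byte account. A quick sketch using Solana's default rent parameters (the constant values below are the defaults from `solana_sdk`'s `Rent`, assumed here rather than queried from the cluster) reproduces it exactly:

```rust
// Default Solana rent parameters (assumed values, matching solana_sdk defaults).
const ACCOUNT_STORAGE_OVERHEAD: u64 = 128;   // bytes charged on top of data_len
const LAMPORTS_PER_BYTE_YEAR: u64 = 3_480;
const EXEMPTION_THRESHOLD_YEARS: u64 = 2;

// Rent-exempt minimum balance for an account with `data_len` bytes of data.
fn rent_exempt_minimum(data_len: u64) -> u64 {
    (ACCOUNT_STORAGE_OVERHEAD + data_len) * LAMPORTS_PER_BYTE_YEAR * EXEMPTION_THRESHOLD_YEARS
}

fn main() {
    // 346-byte position account → 3_299_040 lamports = 0.00329904 SOL
    let lamports = rent_exempt_minimum(346);
    println!("rent-exempt minimum: {} lamports", lamports);
    assert_eq!(lamports, 3_299_040);
}
```

In production you would call `rpc_client.get_minimum_balance_for_rent_exemption(346)` instead of hard-coding the parameters.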

2. Setting Limit Orders

File: src/bin/set_limit_orders.rs

// program_id, authority, whirlpool, rpc_client and executor_keypair are
// initialized the same way as in open_spot_position.rs.

/// Convert a human-readable price to sqrt_price format
/// Formula: sqrt_price = sqrt(price * 10^(decimals_a - decimals_b)) * 2^64
fn price_to_sqrt_price(price: f64, decimals_a: u8, decimals_b: u8) -> u128 {
    let decimal_diff = decimals_a as i32 - decimals_b as i32;
    let adjusted_price = if decimal_diff >= 0 {
        price * 10_f64.powi(decimal_diff)
    } else {
        price / 10_f64.powi(-decimal_diff)
    };

    let sqrt_price_f64 = adjusted_price.sqrt() * (1u128 << 64) as f64;
    sqrt_price_f64 as u128
}

fn main() -> Result<()> {
    // Derive the same position PDA
    let (tuna_spot_position, _) = Pubkey::find_program_address(
        &[b"tuna_spot_position", authority.as_ref(), whirlpool.as_ref()],
        &program_id,
    );

    // Set limit prices
    let lower_price = 130.0; // Buy if SOL drops to $130
    let upper_price = 145.0; // Sell if SOL rises to $145

    // Convert to sqrt_price (SOL = 9 decimals, USDC = 6 decimals)
    let lower_sqrt_price = price_to_sqrt_price(lower_price, 9, 6);
    let upper_sqrt_price = price_to_sqrt_price(upper_price, 9, 6);

    // Build instruction
    // Discriminator: [10, 180, 19, 205, 169, 133, 52, 118]
    let mut data = vec![10, 180, 19, 205, 169, 133, 52, 118];

    // Args: lower_limit_order_sqrt_price (u128), upper_limit_order_sqrt_price (u128)
    data.extend_from_slice(&lower_sqrt_price.to_le_bytes());
    data.extend_from_slice(&upper_sqrt_price.to_le_bytes());

    let instruction = Instruction {
        program_id,
        accounts: vec![
            AccountMeta::new_readonly(authority, true),  // authority (signer)
            AccountMeta::new(tuna_spot_position, false), // tuna_position (writable)
        ],
        data,
    };

    let recent_blockhash = rpc_client.get_latest_blockhash()?;
    let transaction = Transaction::new_signed_with_payer(
        &[instruction],
        Some(&authority),
        &[&executor_keypair],
        recent_blockhash,
    );

    let signature = rpc_client.send_and_confirm_transaction(&transaction)?;
    println!("Limit orders set: {}", signature);

    Ok(())
}

Output (mainnet):

Lower (buy): $130 → sqrt_price: 6651068162312125808640
Upper (sell): $145 → sqrt_price: 7024310870365581606912

Verification on-chain:

solana account 8wvKhHXHfzY4eQZTyK4kTfUtGj46XX5UX8P4S5kBbJ5

# Bytes 184-200 (lower limit):
60 e4 c0 d6 1c 8e 68 01 00 00 00 00 00 00 00 00

# Bytes 200-216 (upper limit):
70 17 34 50 e5 c9 7c 01 00 00 00 00 00 00 00 00

3. Closing a Position

File: src/bin/close_position.rs

fn main() -> Result<()> {
    // Derive position PDA (same seeds as open)
    let (tuna_spot_position, _) = Pubkey::find_program_address(
        &[b"tuna_spot_position", authority.as_ref(), whirlpool.as_ref()],
        &program_id,
    );

    // Derive the position's associated token accounts (same as open)
    let tuna_position_ata_a = get_associated_token_address(&tuna_spot_position, &SOL_MINT);
    let tuna_position_ata_b = get_associated_token_address(&tuna_spot_position, &USDC_MINT);

    // Build close instruction
    // Discriminator: [4, 189, 171, 84, 110, 220, 10, 8]
    let data = vec![4, 189, 171, 84, 110, 220, 10, 8];

    let instruction = Instruction {
        program_id,
        accounts: vec![
            AccountMeta::new(authority, true),               // authority
            AccountMeta::new_readonly(SOL_MINT, false),      // mint_a
            AccountMeta::new_readonly(USDC_MINT, false),     // mint_b
            AccountMeta::new_readonly(spl_token::ID, false), // token_program_a
            AccountMeta::new_readonly(spl_token::ID, false), // token_program_b
            AccountMeta::new(tuna_spot_position, false),     // tuna_position
            AccountMeta::new(tuna_position_ata_a, false),    // tuna_position_ata_a
            AccountMeta::new(tuna_position_ata_b, false),    // tuna_position_ata_b
        ],
        data,
    };

    // Sign and send (blockhash fetch and signing as in the open example)
    let recent_blockhash = rpc_client.get_latest_blockhash()?;
    let transaction = Transaction::new_signed_with_payer(
        &[instruction],
        Some(&authority),
        &[&executor_keypair],
        recent_blockhash,
    );

    let signature = rpc_client.send_and_confirm_transaction(&transaction)?;
    println!("Position closed, rent recovered: {}", signature);

    Ok(())
}

Result: Rent (0.00329904 SOL) returned to wallet, position account deleted.

DeFiTuna SDK Patterns

TypeScript SDK Structure

From DefiTuna/tuna-sdk repository:

// ts-sdk/client/src/txbuilder/openTunaSpotPosition.ts
export async function openTunaSpotPositionInstructions(
  rpc: Rpc<GetAccountInfoApi & GetMultipleAccountsApi>,
  authority: TransactionSigner,
  poolAddress: Address,
  args: OpenTunaSpotPositionInstructionDataArgs,
): Promise<IInstruction[]> {
  // 1. Derive position PDA
  const tunaPositionAddress = (
    await getTunaSpotPositionAddress(authority.address, poolAddress)
  )[0];

  // 2. Get associated token accounts
  const tunaPositionAtaA = (
    await findAssociatedTokenPda({
      owner: tunaPositionAddress,
      mint: mintA.address,
      tokenProgram: mintA.programAddress,
    })
  )[0];

  const tunaPositionAtaB = (
    await findAssociatedTokenPda({
      owner: tunaPositionAddress,
      mint: mintB.address,
      tokenProgram: mintB.programAddress,
    })
  )[0];

  // 3. Build instruction
  return getOpenTunaSpotPositionInstruction({
    authority,
    mintA: mintA.address,
    mintB: mintB.address,
    tokenProgramA: mintA.programAddress,
    tokenProgramB: mintB.programAddress,
    tunaPosition: tunaPositionAddress,
    tunaPositionAtaA,
    tunaPositionAtaB,
    pool: poolAddress,
    associatedTokenProgram: ASSOCIATED_TOKEN_PROGRAM_ADDRESS,
    ...args,
  });
}

Rust SDK Patterns

From rust-sdk/client/src/txbuilder/open_tuna_spot_position.rs:

pub fn open_tuna_spot_position_instructions(
    rpc: &RpcClient,
    authority: &Pubkey,
    pool_address: &Pubkey,
    args: OpenTunaSpotPositionInstructionArgs,
) -> Result<Vec<Instruction>> {
    // 1. Fetch pool account to determine token mints
    let pool_account = rpc.get_account(pool_address)?;

    // Decode based on program owner
    let (mint_a_address, mint_b_address) = if pool_account.owner == FUSIONAMM_ID {
        let pool: FusionPool = decode_account(&pool_account)?;
        (pool.token_mint_a, pool.token_mint_b)
    } else if pool_account.owner == WHIRLPOOL_ID {
        let pool: Whirlpool = decode_account(&pool_account)?;
        (pool.token_mint_a, pool.token_mint_b)
    } else {
        return Err(anyhow!("Unsupported pool type"));
    };

    // 2. Get mint accounts to determine token programs
    let mint_accounts = rpc.get_multiple_accounts(&[
        mint_a_address.into(),
        mint_b_address.into(),
    ])?;

    let mint_a_account = mint_accounts[0]
        .as_ref()
        .ok_or(anyhow!("Token A mint not found"))?;
    let mint_b_account = mint_accounts[1]
        .as_ref()
        .ok_or(anyhow!("Token B mint not found"))?;

    // 3. Build instruction
    Ok(vec![open_tuna_spot_position_instruction(
        authority,
        pool_address,
        &mint_a_address,
        &mint_b_address,
        &mint_a_account.owner, // Token program
        &mint_b_account.owner,
        args,
    )])
}

Key SDK Features:

  1. Pool Type Detection: Automatically handles Orca vs Fusion pools
  2. Token Program Discovery: Supports both SPL Token and Token-2022
  3. Account Pre-Validation: Checks accounts exist before building transaction
  4. Error Handling: Detailed error messages for debugging
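Feature 2 works because the owner of a mint account *is* the token program that manages it. A dependency-free sketch of the same check (comparing base58 strings here instead of `Pubkey` values, purely for illustration; both program IDs are the well-known mainnet addresses):

```rust
// The owner of a mint account is the token program that manages it, so
// comparing the owner against the two known program IDs distinguishes
// SPL Token from Token-2022. String comparison is used here only to keep
// the sketch free of solana_sdk types.
const SPL_TOKEN_ID: &str = "TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA";
const TOKEN_2022_ID: &str = "TokenzQdBNbLqP5VEhdkAS6EPFLC1PHnBqCXEpPxuEb";

fn token_program_kind(mint_account_owner: &str) -> &'static str {
    match mint_account_owner {
        SPL_TOKEN_ID => "spl-token",
        TOKEN_2022_ID => "token-2022",
        _ => "unknown",
    }
}

fn main() {
    println!("{}", token_program_kind(SPL_TOKEN_ID));   // spl-token
    println!("{}", token_program_kind(TOKEN_2022_ID));  // token-2022
}
```

This is why the SDK passes `mint_a_account.owner` as the token program in step 3 above.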

Solana RPC Interaction Patterns

1. Account Fetching with Commitment

use solana_client::rpc_client::RpcClient;
use solana_sdk::commitment_config::CommitmentConfig;

let rpc_client = RpcClient::new_with_commitment(
    "https://api.mainnet-beta.solana.com",
    CommitmentConfig::confirmed(),
);

// Fetch account with a specific commitment level
let account = rpc_client.get_account_with_commitment(
    &position_address,
    CommitmentConfig::finalized(),
)?;

if let Some(account) = account.value {
    println!("Account exists: {} bytes", account.data.len());
}

2. Transaction Simulation Before Sending

// Build transaction
let transaction = Transaction::new_signed_with_payer(
    &[instruction],
    Some(&payer.pubkey()),
    &[&payer],
    recent_blockhash,
);

// Simulate first
match rpc_client.simulate_transaction(&transaction) {
    Ok(result) => {
        if let Some(err) = result.value.err {
            println!("Simulation failed: {:?}", err);
            if let Some(logs) = result.value.logs {
                for log in logs {
                    println!("  {}", log);
                }
            }
            return Err(anyhow!("Simulation error"));
        }
        println!("✅ Simulation successful");
    }
    Err(e) => {
        println!("RPC error during simulation: {}", e);
        return Err(e.into());
    }
}

// Now send for real
let signature = rpc_client.send_and_confirm_transaction(&transaction)?;

3. Parsing Account Data

fn parse_position_limits(account_data: &[u8]) -> Result<(u128, u128)> {
    if account_data.len() < 216 {
        return Err(anyhow!("Account data too short"));
    }

    // Extract limit order sqrt prices (length checked above, so unwrap is safe)
    let lower_bytes: [u8; 16] = account_data[184..200].try_into().unwrap();
    let upper_bytes: [u8; 16] = account_data[200..216].try_into().unwrap();

    let lower_sqrt_price = u128::from_le_bytes(lower_bytes);
    let upper_sqrt_price = u128::from_le_bytes(upper_bytes);

    Ok((lower_sqrt_price, upper_sqrt_price))
}

// Usage
let account = rpc_client.get_account(&position_pda)?;
let (lower, upper) = parse_position_limits(&account.data)?;

// Convert to human-readable prices
let lower_price = sqrt_price_to_price(lower, 9, 6);
let upper_price = sqrt_price_to_price(upper, 9, 6);

println!("Buy limit: ${:.2}", lower_price);
println!("Sell limit: ${:.2}", upper_price);

4. Watching for Transaction Confirmation

use solana_sdk::signature::Signature;
use std::time::Duration;

fn wait_for_confirmation(
    rpc_client: &RpcClient,
    signature: &Signature,
    max_retries: u32,
) -> Result<()> {
    for i in 0..max_retries {
        std::thread::sleep(Duration::from_secs(2));

        if let Ok(status) = rpc_client.get_signature_status(signature) {
            if let Some(result) = status {
                if let Err(e) = result {
                    return Err(anyhow!("Transaction failed: {:?}", e));
                }
                println!("✅ Transaction confirmed");
                return Ok(());
            }
        }

        if i == max_retries - 1 {
            return Err(anyhow!("Transaction timeout"));
        }
    }

    Ok(())
}

Advanced: Modify Position with Collateral

The modify_tuna_spot_position_orca instruction adds collateral and activates trading:

// From IDL
pub struct ModifyTunaSpotPositionOrca {
pub decrease_percent: u32, // 0 for increase
pub collateral_amount: u64, // USDC to deposit
pub borrow_amount: u64, // SOL to borrow from vault
pub required_swap_amount: u64, // 0 for auto-calc
pub remaining_accounts_info: RemainingAccountsInfo,
}

// Required accounts (24 total)
accounts: vec![
AccountMeta::new(authority, true),
AccountMeta::new_readonly(tuna_config, false),
AccountMeta::new_readonly(mint_a, false),
AccountMeta::new_readonly(mint_b, false),
AccountMeta::new_readonly(token_program_a, false),
AccountMeta::new_readonly(token_program_b, false),
AccountMeta::new(market, false),
AccountMeta::new(vault_a, false),
AccountMeta::new(vault_b, false),
AccountMeta::new(vault_a_ata, false),
AccountMeta::new(vault_b_ata, false),
AccountMeta::new(tuna_spot_position, false),
AccountMeta::new(tuna_position_ata_a, false),
AccountMeta::new(tuna_position_ata_b, false),
AccountMeta::new(authority_ata_a, false),
AccountMeta::new(authority_ata_b, false),
AccountMeta::new(fee_recipient_ata_a, false),
AccountMeta::new(fee_recipient_ata_b, false),
AccountMeta::new_readonly(pyth_oracle_price_feed_a, false),
AccountMeta::new_readonly(pyth_oracle_price_feed_b, false),
AccountMeta::new_readonly(whirlpool_program, false),
AccountMeta::new(whirlpool, false),
AccountMeta::new_readonly(memo_program, false),
AccountMeta::new_readonly(system_program::ID, false),
]

Why so many accounts?

  • DeFiTuna needs to interact with lending vaults
  • Price oracles (Pyth) for health checks
  • Fee collection accounts
  • Orca Whirlpool state for actual swaps
  • Multiple token accounts for each token in the pair

Price Calculation Mathematics

Square Root Price Encoding

DeFiTuna stores prices as √P · 2^64 (the same fixed-point encoding as Uniswap V3), with the decimal adjustment applied before taking the square root (matching the price_to_sqrt_price implementation earlier):

sqrt_price = √(P · 10^(decimals_a − decimals_b)) · 2^64

Example: SOL/USDC at $130

  • SOL decimals: 9
  • USDC decimals: 6
  • Adjustment: 10^(9−6) = 1000

sqrt_price = √(130 · 1000) · 2^64
           = √130000 · 18446744073709551616
           ≈ 360.555 · 18446744073709551616
           ≈ 6651068162312125808640
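The worked example above can be reproduced directly with the article's conversion function; a small tolerance is needed because f64 cannot represent the full 22-digit result exactly:

```rust
// Reproduces the $130 example using the same formula as price_to_sqrt_price
// in set_limit_orders.rs: adjust for decimals, take the square root, scale by 2^64.
fn price_to_sqrt_price(price: f64, decimals_a: u8, decimals_b: u8) -> u128 {
    let decimal_diff = decimals_a as i32 - decimals_b as i32;
    let adjusted = if decimal_diff >= 0 {
        price * 10_f64.powi(decimal_diff)
    } else {
        price / 10_f64.powi(-decimal_diff)
    };
    (adjusted.sqrt() * (1u128 << 64) as f64) as u128
}

fn main() {
    let v = price_to_sqrt_price(130.0, 9, 6);
    // Compare against the on-chain value with a small relative tolerance,
    // since f64 precision limits the low-order digits.
    let expected = 6651068162312125808640f64;
    assert!((v as f64 - expected).abs() / expected < 1e-6);
    println!("sqrt_price for $130 ≈ {}", v);
}
```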

Converting Back to Price

fn sqrt_price_to_price(sqrt_price: u128, decimals_a: u8, decimals_b: u8) -> f64 {
    let decimal_diff = decimals_a as i32 - decimals_b as i32;
    let price_raw = (sqrt_price as f64 / (1u128 << 64) as f64).powi(2);

    if decimal_diff >= 0 {
        price_raw / 10_f64.powi(decimal_diff)
    } else {
        price_raw * 10_f64.powi(-decimal_diff)
    }
}

// Verify on-chain data (compare with a tolerance: f64 math is not exact)
let lower_sqrt_price = 6651068162312125808640u128;
let price = sqrt_price_to_price(lower_sqrt_price, 9, 6);
assert!((price - 130.0).abs() < 0.01);

Testing on Mainnet vs Devnet

Devnet Limitations

# DeFiTuna pools don't exist on devnet
$ solana account 9m96e4CieVMjTC7vP1a1pM3qfn5A5kHRPs3SrsVZBGqt --url devnet
Error: AccountNotFound

# Orca Whirlpools also limited on devnet

Recommendation: Test on mainnet with minimal amounts (0.01-0.1 SOL).

Mainnet Testing Strategy

  1. Fund wallet: 0.1 SOL (~$13.65 at current prices)
  2. Gas budget: ~0.000005 SOL per transaction
  3. Rent: ~0.00329904 SOL (recoverable on close)
  4. Test sequence:
    • Open position: ~$0.0007 gas
    • Set limits: ~$0.0007 gas
    • Close position: ~$0.0007 gas + recover rent

Total cost: ~$0.002 for full testing cycle
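The "~$0.002" figure is just the base transaction fee times three. A back-of-envelope check (the 5,000-lamport base fee per signature is Solana's default; the $136.50 SOL price is an assumption, implied by the article's "0.1 SOL ≈ $13.65"):

```rust
// Gas cost for the three-transaction test cycle (open, set limits, close).
// BASE_FEE_LAMPORTS is Solana's default per-signature fee; the SOL price
// below is an assumed figure for illustration.
const BASE_FEE_LAMPORTS: u64 = 5_000;
const LAMPORTS_PER_SOL: u64 = 1_000_000_000;

fn total_gas_lamports(num_txs: u64) -> u64 {
    num_txs * BASE_FEE_LAMPORTS
}

fn main() {
    let lamports = total_gas_lamports(3); // open + set limits + close
    let sol = lamports as f64 / LAMPORTS_PER_SOL as f64;
    let usd = sol * 136.5; // assumed SOL price
    println!("gas for test cycle: {} lamports ≈ ${:.4}", lamports, usd);
}
```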

Production Considerations

1. Error Handling

use thiserror::Error;

#[derive(Debug, Error)]
pub enum DefiTunaError {
    #[error("Position does not exist")]
    PositionNotFound,

    #[error("Invalid tick range")]
    InvalidTickRange,

    #[error("Insufficient collateral")]
    InsufficientCollateral,

    #[error("RPC error: {0}")]
    RpcError(#[from] solana_client::client_error::ClientError),
}

// Usage
match rpc_client.get_account(&position_pda) {
    Ok(_) => { /* process */ }
    Err(_) => return Err(DefiTunaError::PositionNotFound),
}

2. Rate Limiting

use std::time::{Duration, Instant};

struct RateLimiter {
    last_request: Instant,
    min_interval: Duration,
}

impl RateLimiter {
    pub fn new(requests_per_second: u32) -> Self {
        Self {
            last_request: Instant::now(),
            min_interval: Duration::from_millis(1000 / requests_per_second as u64),
        }
    }

    pub fn wait(&mut self) {
        let elapsed = self.last_request.elapsed();
        if elapsed < self.min_interval {
            std::thread::sleep(self.min_interval - elapsed);
        }
        self.last_request = Instant::now();
    }
}

// Usage with a public RPC (limit to 10 req/s)
let mut limiter = RateLimiter::new(10);

for position in positions {
    limiter.wait();
    let account = rpc_client.get_account(&position)?;
    // process...
}

3. Transaction Retry Logic

const MAX_RETRIES: u32 = 3;

fn send_with_retry(
    rpc_client: &RpcClient,
    transaction: &Transaction,
) -> Result<Signature> {
    let mut last_error = None;

    for attempt in 1..=MAX_RETRIES {
        match rpc_client.send_and_confirm_transaction(transaction) {
            Ok(signature) => return Ok(signature),
            Err(e) => {
                println!("Attempt {}/{} failed: {}", attempt, MAX_RETRIES, e);
                last_error = Some(e);

                if attempt < MAX_RETRIES {
                    // Exponential backoff: 2s, 4s, ...
                    std::thread::sleep(Duration::from_secs(2_u64.pow(attempt)));
                }
            }
        }
    }

    Err(last_error.unwrap().into())
}

Complete Working Example

Here's a full program that opens a position, sets limits, and closes:

use anyhow::Result;
use solana_client::rpc_client::RpcClient;
use solana_sdk::{
    instruction::{AccountMeta, Instruction},
    pubkey::Pubkey,
    signature::{Keypair, Signer},
    transaction::Transaction,
};
use std::str::FromStr;

const DEFITUNA_PROGRAM: &str = "tuna4uSQZncNeeiAMKbstuxA9CUkHH6HmC64wgmnogD";
const WHIRLPOOL: &str = "Czfq3xZZDmsdGdUyrNLtRhGc47cXcZtLG4crryfu44zE";
const SOL_MINT: &str = "So11111111111111111111111111111111111111112";
const USDC_MINT: &str = "EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v";

fn main() -> Result<()> {
    // Initialize
    let rpc_url = "https://api.mainnet-beta.solana.com";
    let rpc_client = RpcClient::new(rpc_url);

    let executor_keypair = read_keypair_from_env()?;
    let program_id = Pubkey::from_str(DEFITUNA_PROGRAM)?;
    let whirlpool = Pubkey::from_str(WHIRLPOOL)?;

    // Derive position PDA
    let (position_pda, _) = Pubkey::find_program_address(
        &[
            b"tuna_spot_position",
            executor_keypair.pubkey().as_ref(),
            whirlpool.as_ref(),
        ],
        &program_id,
    );

    println!("Position PDA: {}", position_pda);

    // Step 1: Open position
    let open_ix = build_open_position_instruction(
        &program_id,
        &executor_keypair.pubkey(),
        &whirlpool,
        &position_pda,
    )?;

    let sig1 = send_instruction(&rpc_client, &executor_keypair, open_ix)?;
    println!("✅ Position opened: {}", sig1);

    // Step 2: Set limit orders
    let limits_ix = build_set_limits_instruction(
        &program_id,
        &executor_keypair.pubkey(),
        &position_pda,
        130.0, // buy at $130
        145.0, // sell at $145
    )?;

    let sig2 = send_instruction(&rpc_client, &executor_keypair, limits_ix)?;
    println!("✅ Limits set: {}", sig2);

    // Step 3: Verify on-chain
    std::thread::sleep(std::time::Duration::from_secs(5));
    let account = rpc_client.get_account(&position_pda)?;
    let (lower, upper) = parse_limits(&account.data)?;
    println!("📊 On-chain limits: ${:.2} / ${:.2}", lower, upper);

    // Step 4: Close position
    let close_ix = build_close_position_instruction(
        &program_id,
        &executor_keypair.pubkey(),
        &position_pda,
    )?;

    let sig3 = send_instruction(&rpc_client, &executor_keypair, close_ix)?;
    println!("✅ Position closed: {}", sig3);

    Ok(())
}

fn send_instruction(
    rpc_client: &RpcClient,
    payer: &Keypair,
    instruction: Instruction,
) -> Result<String> {
    let recent_blockhash = rpc_client.get_latest_blockhash()?;
    let transaction = Transaction::new_signed_with_payer(
        &[instruction],
        Some(&payer.pubkey()),
        &[payer],
        recent_blockhash,
    );

    let signature = rpc_client.send_and_confirm_transaction(&transaction)?;
    Ok(signature.to_string())
}

// Helper functions omitted for brevity
// See the full implementation in the GitHub repository

Deployment Checklist

Before deploying to production:

  • Test on mainnet with small amounts first
  • Implement comprehensive error handling
  • Add transaction retry logic
  • Set up monitoring and alerts
  • Use paid RPC endpoint (Helius, Triton, QuickNode)
  • Implement rate limiting
  • Add logging for all RPC calls
  • Store transaction signatures for audit
  • Test limit order execution in both directions
  • Verify position health calculations
  • Plan for emergency position closure

Conclusion

Building on DeFiTuna requires understanding:

  1. PDA Derivation: Positions are deterministic based on authority + pool
  2. Instruction Encoding: Discriminators + args in little-endian format
  3. RPC Patterns: Simulation, confirmation, retry logic
  4. On-Chain Data: Reading and parsing account bytes
  5. SDK Integration: When to use SDK vs raw transactions
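Point 2 is worth restating concretely: every instruction payload in this guide is an 8-byte discriminator followed by the arguments serialized little-endian. The set-limit-orders data from earlier, as a standalone helper:

```rust
// The "discriminator + little-endian args" pattern from point 2, using the
// set-limit-orders instruction shown earlier in the article.
fn build_set_limits_data(lower_sqrt_price: u128, upper_sqrt_price: u128) -> Vec<u8> {
    // 8-byte discriminator from the IDL, then two u128 arguments.
    let mut data = vec![10, 180, 19, 205, 169, 133, 52, 118];
    data.extend_from_slice(&lower_sqrt_price.to_le_bytes());
    data.extend_from_slice(&upper_sqrt_price.to_le_bytes());
    data
}

fn main() {
    let data = build_set_limits_data(6651068162312125808640, 7024310870365581606912);
    assert_eq!(data.len(), 8 + 16 + 16); // discriminator + two u128 args
    assert_eq!(&data[..8], &[10, 180, 19, 205, 169, 133, 52, 118]);
    println!("instruction data: {} bytes", data.len());
}
```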

The mainnet transactions in this guide prove that limit orders are truly stored on-chain and executable without active monitoring. Once set, the DeFiTuna protocol monitors prices and executes trades automatically.

Key Insight: DeFiTuna is a protocol abstraction layer, not a standalone AMM. It wraps existing liquidity sources (Orca, Fusion) with advanced order types, making it a powerful tool for automated trading strategies on Solana.

Resources

Deploying Real-Time Solana Data Streams on Cloudflare Containers with LaserStream

· 13 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

LaserStream deployed on Cloudflare Containers

TL;DR: Deploy a production-ready real-time Solana slot streaming service using Helius LaserStream SDK on Cloudflare Containers.

  • Ultra-low latency: Real-time slot updates via gRPC
  • Global edge deployment: Cloudflare's global network
  • Auto-scaling: Container lifecycle managed by Durable Objects
  • Production-ready: Health checks, error handling, and observability

Why LaserStream on Cloudflare?

Helius LaserStream provides ultra-low latency access to Solana data via gRPC streaming. Traditional WebSocket polling introduces delays; LaserStream eliminates this with direct gRPC connections to Helius nodes.

Why Cloudflare Containers?

Traditional deployments require server provisioning, load balancing, and scaling. Cloudflare Containers solve this:

Feature            Traditional VPS               Cloudflare Containers
-----------------  ----------------------------  --------------------------------
Global deployment  Manual multi-region setup     Automatic edge deployment
Scaling            Manual or autoscaling groups  Auto-scaling via Durable Objects
Cold start         Always running (cost)         Sleep after inactivity
gRPC support       Yes                           Yes (in Containers, not Workers)
SSL/TLS            Manual cert management        Automatic
DDoS protection    Additional service            Built-in

Architecture overview

Key components:

  1. Cloudflare Worker (TypeScript/Hono): HTTP API layer, routing, health checks
  2. Durable Object: Singleton manager for container lifecycle
  3. Rust Container (Axum): gRPC client for LaserStream, HTTP server for API
  4. Helius LaserStream: Real-time Solana data via gRPC

Project structure

laserstream-container/
├── src/
│   └── index.ts        # Worker (Hono API + Durable Object routing)
├── container_src/
│   ├── Cargo.toml      # Rust dependencies
│   └── src/
│       ├── main.rs     # Axum HTTP server
│       └── stream.rs   # LaserStream gRPC client
├── Dockerfile          # Multi-stage Rust build
├── wrangler.jsonc      # Cloudflare configuration
├── package.json        # Build and deployment scripts
└── tsconfig.json       # TypeScript configuration

Prerequisites

Before deploying, ensure you have:

  • Cloudflare account with Workers enabled
  • Helius API key (get one here - free tier available)
  • Docker Desktop (for building container images)
  • Node.js 20+ and pnpm
  • Wrangler CLI (npm install -g wrangler)

Authenticate with Cloudflare

wrangler login

This opens a browser to authorize Wrangler with your Cloudflare account.


Step 1: Worker implementation

The Worker provides the HTTP API layer and routes requests to the container.

Install dependencies

pnpm install @cloudflare/containers hono
pnpm install -D typescript wrangler

Worker code (src/index.ts)

import { Container } from "@cloudflare/containers";
import { Hono } from "hono";

// Define the container class
export class LaserStreamContainer extends Container<Env> {
  defaultPort = 8080;
  sleepAfter = "2m"; // Sleep after 2 minutes of inactivity

  envVars = {
    HELIUS_API_KEY: "", // Set via wrangler secret
    LASERSTREAM_ENDPOINT: "https://laserstream-devnet-ewr.helius-rpc.com",
    RUST_LOG: "info",
  };

  override onStart() {
    console.log("LaserStream container started");
  }

  override onStop() {
    console.log("LaserStream container stopped");
  }

  override onError(error: unknown) {
    console.error("LaserStream container error:", error);
  }
}

// Create Hono app
const app = new Hono<{ Bindings: Env }>();

// Service information endpoint
app.get("/", (c) => {
  return c.text(
    "LaserStream on Cloudflare Containers\n\n" +
      "Endpoints:\n" +
      "GET  /health - Health check\n" +
      "POST /start  - Start LaserStream subscription\n" +
      "GET  /latest - Get latest slot update\n"
  );
});

// Worker health check
app.get("/health", (c) => {
  return c.json({
    status: "ok",
    timestamp: new Date().toISOString(),
  });
});

// Proxy all other requests to the singleton container
app.all("*", async (c) => {
  try {
    // Get singleton container by name
    const containerId = c.env.LASERSTREAM_CONTAINER.idFromName("laserstream-main");
    const container = c.env.LASERSTREAM_CONTAINER.get(containerId);

    // Forward request to container
    return await container.fetch(c.req.raw);
  } catch (error) {
    console.error("Container error:", error);
    return c.json(
      { error: "Container unavailable", details: String(error) },
      500
    );
  }
});

export default app;

Key concepts

  • Durable Object singleton: idFromName("laserstream-main") ensures only one container instance handles all requests
  • Sleep after inactivity: Container sleeps after 2 minutes, saving costs
  • Error handling: Graceful fallback if container is unavailable
  • Environment variables: Container receives config via envVars

Step 2: Rust container implementation

The Rust container runs the LaserStream gRPC client and exposes an HTTP API.

Container dependencies (container_src/Cargo.toml)

[package]
name = "laserstream_container"
version = "0.1.0"
edition = "2021"

[dependencies]
anyhow = "1.0"
axum = "0.7"
chrono = { version = "0.4", features = ["serde"] }
futures-util = "0.3"
helius-laserstream = "0.1.5"
serde = { version = "1.0", features = ["derive"] }
tokio = { version = "1.49", features = ["full"] }
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }

HTTP server (container_src/src/main.rs)

use std::{
    net::SocketAddr,
    sync::{
        atomic::{AtomicBool, Ordering},
        Arc,
    },
};

use axum::{
    extract::State,
    http::StatusCode,
    response::IntoResponse,
    routing::{get, post},
    Json, Router,
};
use serde::Serialize;
use tokio::sync::RwLock;
use tracing::{error, info};
use tracing_subscriber::EnvFilter;

mod stream;

#[derive(Clone)]
struct AppState {
    started: Arc<AtomicBool>,
    latest: Arc<RwLock<Option<LatestSlot>>>,
}

#[derive(Debug, Clone, Serialize)]
struct LatestSlot {
    slot: u64,
    parent: Option<u64>,
    status: String,
    created_at_rfc3339: Option<String>,
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Initialize logging
    tracing_subscriber::fmt()
        .with_env_filter(EnvFilter::from_default_env())
        .init();

    let port: u16 = std::env::var("PORT")
        .unwrap_or_else(|_| "8080".to_string())
        .parse()?;

    let state = AppState {
        started: Arc::new(AtomicBool::new(false)),
        latest: Arc::new(RwLock::new(None)),
    };

    // Start stream on boot
    ensure_stream_started(state.clone()).await;

    // Define routes
    let app = Router::new()
        .route("/health", get(health))
        .route("/start", post(start))
        .route("/latest", get(latest))
        .with_state(state);

    let addr = SocketAddr::from(([0, 0, 0, 0], port));
    info!("listening on {}", addr);

    let listener = tokio::net::TcpListener::bind(addr).await?;
    axum::serve(listener, app).await?;

    Ok(())
}

async fn health() -> impl IntoResponse {
    (StatusCode::OK, "ok\n")
}

async fn start(State(state): State<AppState>) -> impl IntoResponse {
    ensure_stream_started(state).await;
    (StatusCode::OK, "started\n")
}

async fn latest(State(state): State<AppState>) -> impl IntoResponse {
    let guard = state.latest.read().await;
    match guard.as_ref() {
        Some(slot) => (StatusCode::OK, Json(slot.clone())),
        None => (
            StatusCode::NOT_FOUND,
            Json(LatestSlot {
                slot: 0,
                parent: None,
                status: "no data".to_string(),
                created_at_rfc3339: None,
            }),
        ),
    }
}

async fn ensure_stream_started(state: AppState) {
    if !state.started.swap(true, Ordering::SeqCst) {
        tokio::spawn(async move {
            if let Err(e) = stream::run_slot_stream(state.clone()).await {
                error!("Stream error: {}", e);
                state.started.store(false, Ordering::SeqCst);
            }
        });
    }
}

LaserStream client (container_src/src/stream.rs)

use anyhow::{anyhow, Context};
use chrono::{DateTime, Utc};
use futures_util::StreamExt;
use tokio::pin;
use tracing::{info, warn};

use crate::{AppState, LatestSlot};
use helius_laserstream::{
config::LaserstreamConfig,
grpc::{subscribe_update::UpdateOneof, SubscribeRequest},
client::subscribe,
};

pub async fn run_slot_stream(state: AppState) -> anyhow::Result<()> {
    let endpoint = std::env::var("LASERSTREAM_ENDPOINT")
        .context("LASERSTREAM_ENDPOINT is required")?;
    let api_key = std::env::var("HELIUS_API_KEY")
        .context("HELIUS_API_KEY is required")?;

    info!("connecting to LaserStream at {}", endpoint);

    let config = LaserstreamConfig {
        endpoint,
        x_token: Some(api_key),
    };

    // A filter entry is required in `slots` to receive slot updates; with an
    // empty map the server has nothing to match and streams nothing.
    // (SubscribeRequestFilterSlots is re-exported from the Yellowstone proto.)
    let mut slots = HashMap::new();
    slots.insert(
        "slots".to_string(),
        helius_laserstream::grpc::SubscribeRequestFilterSlots::default(),
    );

    let request = SubscribeRequest {
        slots,
        accounts: HashMap::new(),
        transactions: HashMap::new(),
        blocks: HashMap::new(),
        blocks_meta: HashMap::new(),
        entry: HashMap::new(),
        commitment: None,
        accounts_data_slice: vec![],
        ping: None,
    };

    let mut stream = subscribe(config, request).await?;
    pin!(stream);

    info!("LaserStream connected, waiting for updates...");

    while let Some(msg) = stream.next().await {
        match msg {
            Ok(update) => {
                if let Some(UpdateOneof::Slot(slot_update)) = update.update_oneof {
                    let created_at = slot_update
                        .created_at
                        .and_then(|ts| DateTime::from_timestamp(ts.seconds, ts.nanos as u32))
                        .map(|dt: DateTime<Utc>| dt.to_rfc3339());

                    let latest = LatestSlot {
                        slot: slot_update.slot,
                        parent: Some(slot_update.parent),
                        status: format!("{:?}", slot_update.status),
                        created_at_rfc3339: created_at,
                    };

                    *state.latest.write().await = Some(latest.clone());
                    info!("slot update: {:?}", latest);
                }
            }
            Err(e) => {
                warn!("stream error: {}", e);
                return Err(anyhow!("stream error: {}", e));
            }
        }
    }

    Ok(())
}

Step 3: Dockerfile

Build the Rust container with a multi-stage Dockerfile for minimal image size.

# syntax=docker/dockerfile:1

FROM rust:1.83-slim AS build

WORKDIR /app

# Install build dependencies
RUN apt-get update && apt-get install -y \
    pkg-config \
    libssl-dev \
    protobuf-compiler \
    build-essential \
    g++ \
    && rm -rf /var/lib/apt/lists/*

# Copy Rust source
COPY container_src/Cargo.toml ./
COPY container_src/src ./src

# Build release binary
RUN cargo build --release

# Runtime image
FROM debian:bookworm-slim
RUN apt-get update && \
    apt-get install -y ca-certificates libssl3 && \
    rm -rf /var/lib/apt/lists/*

COPY --from=build /app/target/release/laserstream_container /laserstream_container
EXPOSE 8080

CMD ["/laserstream_container"]

Build optimizations

  • Multi-stage build: Build stage uses full Rust toolchain, runtime uses minimal Debian
  • Dependency caching: Cargo dependencies cached in Docker layers
  • Release build: Optimized binary with --release
  • Minimal runtime: Only ca-certificates and libssl3 in final image
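One caveat on the caching point: as written, the build stage copies `src` before compiling, so any code change invalidates the layer and recompiles all dependencies. A common variant (an assumed optimization, not part of the article's Dockerfile) compiles dependencies against a stub `main.rs` first, so that layer stays cached until `Cargo.toml` changes:

```dockerfile
# Hypothetical variant of the build stage above: pre-build dependencies
# against a stub main so the dependency layer is cached independently.
COPY container_src/Cargo.toml ./
RUN mkdir src && echo "fn main() {}" > src/main.rs \
    && cargo build --release \
    && rm -rf src

# Copy the real sources; only this layer rebuilds on code changes.
COPY container_src/src ./src
RUN touch src/main.rs && cargo build --release
```

The `touch` forces cargo to recompile the binary crate itself even though the file timestamps may match the cached stub build.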

Step 4: Wrangler configuration

Configure the Worker and Container deployment.

wrangler.jsonc

{
  "$schema": "node_modules/wrangler/config-schema.json",
  "name": "laserstream-container",
  "main": "src/index.ts",
  "compatibility_date": "2025-01-08",
  "compatibility_flags": ["nodejs_compat"],

  "observability": {
    "enabled": true
  },

  "containers": [
    {
      "class_name": "LaserStreamContainer",
      "image": "registry.cloudflare.com/<ACCOUNT_ID>/laserstream-container-rust:v1.0.0",
      "max_instances": 10
    }
  ],

  "durable_objects": {
    "bindings": [
      {
        "class_name": "LaserStreamContainer",
        "name": "LASERSTREAM_CONTAINER"
      }
    ]
  },

  "migrations": [
    {
      "new_sqlite_classes": ["LaserStreamContainer"],
      "tag": "v1"
    }
  ],

  "vars": {
    "LASERSTREAM_ENDPOINT": "https://laserstream-devnet-ewr.helius-rpc.com"
  }
}

Replace <ACCOUNT_ID> with your Cloudflare account ID (find it in the Cloudflare dashboard).

Configuration explained

  • compatibility_date: Workers runtime API version
  • containers: Container image registry path and scaling settings
  • durable_objects: Singleton container manager binding
  • migrations: Durable Object class migrations (here, declaring the class as SQLite-backed)
  • vars: Non-sensitive environment variables

Step 5: Build and deploy

Build scripts (package.json)

{
  "name": "laserstream-container",
  "version": "1.0.0",
  "scripts": {
    "build": "tsc && cargo build --release --manifest-path=container_src/Cargo.toml",
    "build:container": "wrangler containers build . --tag laserstream-container-rust:latest",
    "push:container": "wrangler containers push laserstream-container-rust:latest",
    "deploy": "wrangler deploy",
    "dev": "wrangler dev",
    "tail": "wrangler tail",
    "secret:set": "wrangler secret put"
  },
  "dependencies": {
    "@cloudflare/containers": "^0.0.21",
    "hono": "4.11.1"
  },
  "devDependencies": {
    "@types/node": "^25.0.3",
    "typescript": "5.9.3",
    "wrangler": "4.58.0"
  }
}

Build the container image

# Build Rust container locally
pnpm run build:container

# Tag with version
docker tag laserstream-container-rust:latest laserstream-container-rust:v1.0.0

# Push to Cloudflare registry
pnpm run push:container

Expected output:

Building container image...
Successfully tagged laserstream-container-rust:latest
Pushing to registry.cloudflare.com/...
Image pushed successfully
Digest: sha256:59c03a69b057...

Set secrets

Before deploying, set the Helius API key:

echo "YOUR_HELIUS_API_KEY" | pnpm run secret:set HELIUS_API_KEY

Alternatively, use interactive mode:

pnpm run secret:set HELIUS_API_KEY
# Paste your API key when prompted

Deploy to Cloudflare

pnpm run deploy

Expected output:

Uploading Worker...
Published laserstream-container (0.42 sec)
https://laserstream-container.<your-subdomain>.workers.dev

Step 6: Testing the deployment

Health check

curl https://laserstream-container.<your-subdomain>.workers.dev/health

Expected response:

{
  "status": "ok",
  "timestamp": "2025-01-09T12:34:56.789Z"
}

Start LaserStream

curl -X POST https://laserstream-container.<your-subdomain>.workers.dev/start

Expected response:

started

Get latest slot update

curl https://laserstream-container.<your-subdomain>.workers.dev/latest

Expected response:

{
  "slot": 285432167,
  "parent": 285432166,
  "status": "Confirmed",
  "created_at_rfc3339": "2025-01-09T12:35:01.234Z"
}

Monitoring and debugging

View live logs

pnpm run tail

Expected output:

2025-01-09T12:34:56.789Z INFO laserstream_container: listening on 0.0.0.0:8080
2025-01-09T12:35:01.234Z INFO laserstream_container: connecting to LaserStream at https://laserstream-devnet-ewr.helius-rpc.com
2025-01-09T12:35:02.456Z INFO laserstream_container: LaserStream connected, waiting for updates...
2025-01-09T12:35:03.678Z INFO laserstream_container: slot update: LatestSlot { slot: 285432167, ... }

Common issues

"Missing or invalid API key"

Cause: HELIUS_API_KEY secret not set or incorrect.

Fix:

# Verify secret is set
wrangler secret list

# Re-set if missing
echo "YOUR_KEY" | pnpm run secret:set HELIUS_API_KEY

# Redeploy
pnpm run deploy

Container not starting

Cause: Docker image not pushed or incorrect registry path.

Fix:

# Verify image exists
docker images | grep laserstream

# Rebuild and push
pnpm run build:container
pnpm run push:container
pnpm run deploy

"Container unavailable" errors

Cause: Container sleeping or crashed.

Fix:

# Check logs
pnpm run tail

# Restart container
curl -X POST https://<your-url>/start

Production considerations

Scaling and costs

  • Cold starts: First request after sleep takes ~2-5 seconds to spin up container
  • Warm instances: Subsequent requests are instant while container is active
  • Sleep after: Configure sleepAfter based on request frequency
  • Max instances: Set max_instances based on expected load

Cost optimization

export class LaserStreamContainer extends Container<Env> {
  sleepAfter = "5m"; // Sleep after 5 minutes for dev
  // sleepAfter = "30m"; // Sleep after 30 minutes for production
}

Error handling

Add retry logic and circuit breakers:

// In stream.rs. `try_connect` is a helper (not shown) that wraps a single
// connect-and-consume pass over the stream.
pub async fn run_slot_stream(state: AppState) -> anyhow::Result<()> {
    let mut retry_count = 0;
    const MAX_RETRIES: u32 = 5;

    loop {
        match try_connect(&state).await {
            Ok(_) => {
                retry_count = 0; // Reset on success
            }
            Err(e) => {
                retry_count += 1;
                if retry_count >= MAX_RETRIES {
                    return Err(anyhow!("Max retries exceeded: {}", e));
                }
                let backoff = std::time::Duration::from_secs(2_u64.pow(retry_count));
                warn!("Retry {} after {:?}: {}", retry_count, backoff, e);
                tokio::time::sleep(backoff).await;
            }
        }
    }
}
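One refinement worth considering: the delay above grows as 2^n with no ceiling, which is fine at 5 retries (32 s) but balloons quickly if the cap is raised. A small std-only helper that bounds the delay, shown as a sketch:

```rust
use std::time::Duration;

// Exponential backoff with a ceiling: 2, 4, 8, ... seconds, capped at `max`.
// checked_pow avoids overflow for large retry counts.
fn backoff(retry: u32, max: Duration) -> Duration {
    let secs = 2u64.checked_pow(retry).unwrap_or(u64::MAX);
    Duration::from_secs(secs).min(max)
}

fn main() {
    let max = Duration::from_secs(30);
    assert_eq!(backoff(1, max), Duration::from_secs(2));
    assert_eq!(backoff(3, max), Duration::from_secs(8));
    assert_eq!(backoff(10, max), max); // 1024 s capped to 30 s
}
```

Dropping this into the retry loop in place of the raw `2_u64.pow(retry_count)` keeps reconnect attempts frequent enough to recover promptly after long outages.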

Multi-region deployment

For global low-latency access, use Cloudflare's automatic edge deployment:

{
  "placement": { "mode": "smart" }
}

With Smart Placement, Cloudflare runs the Worker close to the backend services it talks to, rather than pinning it to the edge location nearest the client.

Security

  1. API key rotation: Regularly rotate HELIUS_API_KEY
  2. Rate limiting: Add rate limiting in Worker
  3. Authentication: Add bearer tokens for production
// In Worker
app.use("*", async (c, next) => {
  const token = c.req.header("Authorization");
  if (!token || token !== `Bearer ${c.env.API_SECRET}`) {
    return c.json({ error: "Unauthorized" }, 401);
  }
  await next();
});

Integration examples

Polling from a trading bot

// jupiter-laserstream-bot/src/poller.ts
import { setInterval } from "timers/promises";

const CONTAINER_URL = "https://laserstream-container.<subdomain>.workers.dev";

async function pollLatestSlot() {
  const response = await fetch(`${CONTAINER_URL}/latest`);
  const data = await response.json();

  console.log(`Latest slot: ${data.slot}`);

  // Trigger trading logic (handleSlotUpdate is application-specific)
  await handleSlotUpdate(data);
}

// Poll every 2 seconds
for await (const _ of setInterval(2000)) {
  await pollLatestSlot();
}

WebSocket broadcasting

Convert HTTP polling to WebSocket for browser clients:

// websocket-bridge.ts
import { WebSocketServer } from "ws";

const wss = new WebSocketServer({ port: 8080 });
const CONTAINER_URL = "https://laserstream-container.<subdomain>.workers.dev";

wss.on("connection", (ws) => {
  const interval = setInterval(async () => {
    const response = await fetch(`${CONTAINER_URL}/latest`);
    const data = await response.json();
    ws.send(JSON.stringify(data));
  }, 1000);

  ws.on("close", () => clearInterval(interval));
});

Comparison with alternatives

Approach                   Latency   Cost                Complexity   Scalability
--------                   -------   ----                ----------   -----------
WebSocket polling          ~500ms    Low                 Low          Manual
Traditional VPS            ~100ms    Medium              High         Manual
LaserStream + Cloudflare   ~50ms     Low (pay-per-use)   Medium       Automatic
Direct gRPC                ~30ms     Medium              High         Manual

When to use this approach

Use Cloudflare Containers when:

  • You need global low-latency access
  • You want automatic scaling
  • You prefer pay-per-use pricing
  • You need DDoS protection

Use traditional VPS when:

  • You need full control over infrastructure
  • You have consistent high traffic (24/7)
  • You need specialized networking configurations

Conclusion

Deploying LaserStream on Cloudflare Containers provides a production-ready solution for real-time Solana data streaming with:

  • Global edge deployment: Automatic routing to nearest edge location
  • Auto-scaling: Container lifecycle managed by Durable Objects
  • Cost efficiency: Pay only for active container time
  • Developer experience: Simple deployment with Wrangler CLI

The combination of Helius LaserStream's ultra-low latency gRPC streaming and Cloudflare's global network creates a powerful platform for building real-time Solana applications.

Next steps

  • Add caching: Cache slot updates in Durable Object storage
  • Add metrics: Integrate with Cloudflare Analytics
  • Add filtering: Filter specific accounts or programs
  • Add historical replay: Use LaserStream's historical slot replay (up to 3000 slots)

Resources