1. RAG Pipelines
Fit 30-70% more retrieved documents in your context window with ISON's compact format.
Document Retrieval with Metadata
A typical RAG system retrieves documents with scores and metadata. ISON's tabular format is perfect for this:
{
  "documents": [
    {
      "id": "doc_1",
      "content": "ISON is optimized for LLMs",
      "score": 0.92,
      "source": "documentation",
      "date": "2025-01-15"
    },
    {
      "id": "doc_2",
      "content": "RAG systems benefit from compact formats",
      "score": 0.88,
      "source": "blog",
      "date": "2025-01-10"
    }
  ]
}
table.documents
id content score source date
doc_1 "ISON is optimized for LLMs" 0.92 documentation 2025-01-15
doc_2 "RAG systems benefit from compact..." 0.88 blog 2025-01-10
Python Implementation
import ison_parser
from vector_db import search_similar

def rag_query(question: str, top_k: int = 10):
    # Retrieve similar documents
    results = search_similar(question, top_k=top_k)

    # Format as ISON - 30-70% more efficient than JSON!
    docs_data = {
        "documents": [
            {
                "id": r.id,
                "content": r.content,
                "score": r.score,
                "source": r.metadata.get("source"),
                "date": r.metadata.get("date"),
            }
            for r in results
        ]
    }
    ison_context = ison_parser.dumps(
        ison_parser.from_dict(docs_data)
    )

    # Now you can fit 10 documents instead of 5 in the same context!
    return query_llm(question, ison_context)
Multi-Source RAG
Combining documents from multiple sources with references:
# Vector search results
table.search_results
id content score
r1 "ISON reduces token usage by 30-70%" 0.95
r2 "Tabular formats are LLM-friendly" 0.88
r3 "References enable graph structures" 0.85
# Document sources
table.sources
resultId type url author
r1 documentation /docs/efficiency "Team ISON"
r2 blog /blog/2025-01-15 Alice
r3 tutorial /learn/references Bob
# Related topics
table.topics
resultId topic
r1 optimization
r1 performance
r2 llm-design
r3 data-structures
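The `:r1`-style references above let you keep sources and topics out of the main results table and join them back only when you assemble the final answer. A minimal sketch of that join in plain Python, with the example rows hard-coded in place of a real retriever:

# Join the three tables above by their shared result id.
search_results = [
    {"id": "r1", "content": "ISON reduces token usage by 30-70%", "score": 0.95},
    {"id": "r2", "content": "Tabular formats are LLM-friendly", "score": 0.88},
    {"id": "r3", "content": "References enable graph structures", "score": 0.85},
]
sources = {
    "r1": {"type": "documentation", "url": "/docs/efficiency", "author": "Team ISON"},
    "r2": {"type": "blog", "url": "/blog/2025-01-15", "author": "Alice"},
    "r3": {"type": "tutorial", "url": "/learn/references", "author": "Bob"},
}
topics = {
    "r1": ["optimization", "performance"],
    "r2": ["llm-design"],
    "r3": ["data-structures"],
}

def cite(result_id: str) -> str:
    """Build a human-readable citation for one search result."""
    src = sources[result_id]
    tags = ", ".join(topics.get(result_id, []))
    return f'{src["type"]} by {src["author"]} ({src["url"]}) [{tags}]'

for r in search_results:
    print(f'{r["id"]} ({r["score"]:.2f}): {r["content"]} | {cite(r["id"])}')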
2. Agentic AI & Context Injection
Understanding context injection is key to effective LLM applications. Every piece of data you inject - tool results, state, retrieved documents - competes for the same limited context window.
Why Context Representation Matters
Modern AI applications constantly inject structured data into LLM context:
- MCP Tool Results - Database queries, API responses, file contents
- RAG Retrieval - Document chunks with metadata and relevance scores
- Agent State - Task progress, accumulated context, pending actions
- User Context - Profiles, preferences, permissions, history
Consider what JSON adds to each injection: {"key": "value"} requires quotes around the key, quotes around string values, colons, commas, and braces. For tabular data, this overhead repeats for every row and column. ISON eliminates this syntactic redundancy while remaining human-readable.
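One way to see the overhead concretely is to tokenize the same rows in both representations. A minimal sketch, assuming the `tiktoken` tokenizer is available purely for illustration; the exact savings depend on the model's tokenizer and on your data, and the hand-built table below follows the layout of the examples rather than a formal ISON serializer:

import json
import tiktoken  # assumption: used here only to count tokens

rows = [
    {"id": 1, "name": "Alice", "role": "admin", "status": "active"},
    {"id": 2, "name": "Bob", "role": "developer", "status": "active"},
    {"id": 3, "name": "Carol", "role": "analyst", "status": "inactive"},
]

# JSON: keys, quotes, colons, commas and braces repeat on every row.
as_json = json.dumps({"users": rows})

# ISON-style table: header once, then whitespace-separated values per row.
lines = ["table.users", "id name role status"] + [
    f'{r["id"]} {r["name"]} {r["role"]} {r["status"]}' for r in rows
]
as_ison = "\n".join(lines)

enc = tiktoken.get_encoding("cl100k_base")
print("JSON tokens:", len(enc.encode(as_json)))
print("ISON tokens:", len(enc.encode(as_ison)))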
MCP Server Response Example
A typical MCP tool returning database results:
{
  "tool": "query_users",
  "result": {
    "rows": [
      {"id": 1, "name": "Alice", "role": "admin", "status": "active"},
      {"id": 2, "name": "Bob", "role": "developer", "status": "active"},
      {"id": 3, "name": "Carol", "role": "analyst", "status": "inactive"}
    ],
    "metadata": {
      "total": 3,
      "query_time_ms": 15,
      "cached": false
    }
  }
}
meta.query
tool total queryTimeMs cached
query_users 3 15 false
table.users
id name role status
1 Alice admin active
2 Bob developer active
3 Carol analyst inactive
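On the tool side, producing that text is a small formatting step. A minimal sketch of a formatter that turns query rows plus metadata into the two blocks above; it is independent of any particular MCP SDK, and the `quote` helper is an illustrative simplification of ISON's quoting rules:

def quote(value) -> str:
    """Quote values that contain spaces, following the examples above."""
    s = str(value).lower() if isinstance(value, bool) else str(value)
    return f'"{s}"' if " " in s else s

def format_query_result(rows: list[dict], meta: dict) -> str:
    """Render a tool result as a meta block plus a users table."""
    out = [
        "meta.query",
        "tool total queryTimeMs cached",
        f'{quote(meta["tool"])} {meta["total"]} {meta["query_time_ms"]} {quote(meta["cached"])}',
        "",
        "table.users",
        "id name role status",
    ]
    for r in rows:
        out.append(" ".join(quote(r[k]) for k in ("id", "name", "role", "status")))
    return "\n".join(out)

rows = [
    {"id": 1, "name": "Alice", "role": "admin", "status": "active"},
    {"id": 2, "name": "Bob", "role": "developer", "status": "active"},
    {"id": 3, "name": "Carol", "role": "analyst", "status": "inactive"},
]
print(format_query_result(rows, {"tool": "query_users", "total": 3,
                                 "query_time_ms": 15, "cached": False}))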
Agentic Loop State Injection
Agents accumulate state across iterations. ISON keeps it compact:
# Agent execution state - injected into each iteration
object.agent
id step totalSteps status objective
agent_001 4 8 running "Analyze Q4 sales data"
table.completed
step action result tokensUsed
1 query_database "Retrieved 1,247 records" 3200
2 summarize "Created 5 key insights" 2100
3 draft_report "Generated 800-word draft" 4500
4 review "Identified 3 issues" 1800
table.pending
step action dependencies
5 revise ":4"
6 add_charts ":5"
7 finalize ":5,:6"
8 deliver ":7"
# Accumulated context - efficiently stored
table.insights
id category finding
i1 revenue "Q4 up 23% YoY"
i2 products "Widget Pro leads sales"
i3 regions "APAC fastest growing"
Implementation with Claude
import anthropic
import ison_parser

async def agent_iteration(state: dict, task: str):
    # Convert state to ISON - 30-70% smaller context injection
    ison_state = ison_parser.dumps(ison_parser.from_dict(state))

    client = anthropic.AsyncAnthropic()  # async client, since we await the call
    response = await client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        system=f"""You are an AI agent. Current state:

{ison_state}

Analyze and proceed with the next step.""",
        messages=[{"role": "user", "content": task}],
    )

    # The tokens saved leave more room for agent reasoning
    return parse_agent_response(response)
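A usage sketch for the function above, with a state dict that mirrors the agent tables shown earlier; it assumes an Anthropic API key is configured and that `parse_agent_response` is defined elsewhere, as in the block above:

import asyncio

state = {
    "agent": {"id": "agent_001", "step": 4, "totalSteps": 8,
              "status": "running", "objective": "Analyze Q4 sales data"},
    "completed": [
        {"step": 1, "action": "query_database", "result": "Retrieved 1,247 records", "tokensUsed": 3200},
        {"step": 2, "action": "summarize", "result": "Created 5 key insights", "tokensUsed": 2100},
    ],
    "pending": [
        {"step": 5, "action": "revise", "dependencies": ":4"},
    ],
}

asyncio.run(agent_iteration(state, "Continue the Q4 sales analysis."))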
3. Multi-Agent Systems
Clean, efficient protocol for agent communication and state synchronization.
Agent State Sharing
Multiple agents need to share and synchronize state efficiently:
# Agent registry
table.agents
id name role status lastActive
agent_1 ResearchAgent researcher active 2025-01-20T15:30:00
agent_2 WriterAgent writer idle 2025-01-20T15:25:00
agent_3 ReviewerAgent reviewer busy 2025-01-20T15:31:00
# Current tasks
table.tasks
id assignedTo priority status
task_101 :agent_1 high in_progress
task_102 :agent_2 medium pending
task_103 :agent_3 high in_progress
# Inter-agent messages
table.messages
from to content timestamp
:agent_1 :agent_3 "Research complete, ready for review" 2025-01-20T15:30:00
:agent_3 :agent_1 "Approved" 2025-01-20T15:31:00
Implementation
from datetime import datetime

import ison_parser

class AgentCoordinator:
    def __init__(self):
        self.state = {}
        self.agents = []

    def broadcast_state(self):
        """Share state with all agents in ISON format"""
        state_doc = ison_parser.from_dict(self.state)
        ison_state = ison_parser.dumps(state_doc)

        # Broadcast to all agents
        for agent in self.agents:
            agent.receive_state(ison_state)

    def agent_update(self, agent_id, update_data):
        """Receive a state update from an agent"""
        update_doc = ison_parser.from_dict({
            "update": {
                "agent_id": agent_id,
                "timestamp": datetime.now().isoformat(),
                **update_data,
            }
        })
        ison_update = ison_parser.dumps(update_doc)
        self.process_update(ison_update)
Workflow Orchestration
# Workflow definition
object.workflow
id name version
wf_001 "Content Pipeline" 1.2
# Workflow steps
table.steps
id workflowId sequence agent action dependencies
s1 wf_001 1 :agent_1 research null
s2 wf_001 2 :agent_2 draft :s1
s3 wf_001 3 :agent_3 review :s2
s4 wf_001 4 :agent_2 revise :s3
# Execution log
table.executions
stepId status startTime endTime output
s1 completed 2025-01-20T10:00:00 2025-01-20T10:15:00 "Research findings..."
s2 in_progress 2025-01-20T10:16:00 null null
s3 pending null null null
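A coordinator can read the `dependencies` column together with the execution log to decide which step is runnable next. A minimal sketch over plain dicts that mirror the two tables above:

steps = [
    {"id": "s1", "agent": "agent_1", "action": "research", "dependencies": []},
    {"id": "s2", "agent": "agent_2", "action": "draft", "dependencies": ["s1"]},
    {"id": "s3", "agent": "agent_3", "action": "review", "dependencies": ["s2"]},
    {"id": "s4", "agent": "agent_2", "action": "revise", "dependencies": ["s3"]},
]
executions = {"s1": "completed", "s2": "in_progress", "s3": "pending", "s4": "pending"}

def runnable_steps():
    """Steps whose dependencies are all completed and that have not started yet."""
    done = {sid for sid, status in executions.items() if status == "completed"}
    return [s["id"] for s in steps
            if executions[s["id"]] == "pending" and set(s["dependencies"]) <= done]

print(runnable_steps())  # [] until s2 completes, then ['s3']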
4. Database Query Results
Perfect format for relational, vector, and graph database outputs.
SQL Query Results
ISON naturally represents tabular SQL results:
# Query: SELECT users.*, orders.* FROM users JOIN orders ON orders.user_id = users.id
table.user_orders
user_id user_name user_email order_id order_total order_date
1 Alice alice@example.com 1001 125.50 2025-01-15
1 Alice alice@example.com 1002 89.99 2025-01-18
2 Bob bob@example.com 1003 234.00 2025-01-17
# Normalized version with references
table.users
id name email
1 Alice alice@example.com
2 Bob bob@example.com
table.orders
id userId total date
1001 :1 125.50 2025-01-15
1002 :1 89.99 2025-01-18
1003 :2 234.00 2025-01-17
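Producing the normalized form from a joined result set is mostly a grouping step: emit each user once and point every order at it with a reference. A minimal sketch, with the example rows hard-coded in place of a real database cursor:

joined_rows = [
    {"user_id": 1, "user_name": "Alice", "user_email": "alice@example.com",
     "order_id": 1001, "order_total": 125.50, "order_date": "2025-01-15"},
    {"user_id": 1, "user_name": "Alice", "user_email": "alice@example.com",
     "order_id": 1002, "order_total": 89.99, "order_date": "2025-01-18"},
    {"user_id": 2, "user_name": "Bob", "user_email": "bob@example.com",
     "order_id": 1003, "order_total": 234.00, "order_date": "2025-01-17"},
]

users, orders = {}, []
for row in joined_rows:
    # Emit each user once; repeated join rows collapse into a reference from orders.
    users[row["user_id"]] = f'{row["user_id"]} {row["user_name"]} {row["user_email"]}'
    orders.append(f'{row["order_id"]} :{row["user_id"]} {row["order_total"]} {row["order_date"]}')

ison = "\n".join(["table.users", "id name email", *users.values(),
                  "", "table.orders", "id userId total date", *orders])
print(ison)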
Graph Database Results
Express graph structures with references:
# Nodes: People
table.people
id name role
p1 Alice Engineer
p2 Bob Manager
p3 Carol Designer
# Edges: Relationships
table.relationships
from to type since
:p1 :p2 reports_to 2024-01-01
:p3 :p2 reports_to 2024-06-01
:p1 :p3 collaborates_with 2024-08-15
# Projects
table.projects
id name lead members
proj_1 "Website Redesign" :p3 ":p1,:p2,:p3"
proj_2 "API Development" :p1 ":p1,:p2"
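On the consuming side, `:p1`-style references resolve to rows in the node table, so simple graph queries reduce to dictionary lookups. A minimal sketch over the people and relationships tables above:

people = {"p1": "Alice", "p2": "Bob", "p3": "Carol"}
relationships = [
    {"from": "p1", "to": "p2", "type": "reports_to"},
    {"from": "p3", "to": "p2", "type": "reports_to"},
    {"from": "p1", "to": "p3", "type": "collaborates_with"},
]

def direct_reports(manager_id: str) -> list[str]:
    """Names of everyone whose reports_to edge points at the given manager."""
    return [people[e["from"]] for e in relationships
            if e["type"] == "reports_to" and e["to"] == manager_id]

print(direct_reports("p2"))  # ['Alice', 'Carol']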
Vector Database Results
# Vector similarity search results
table.similar_items
id content embedding_preview distance
item_42 "Machine learning tutorial" "[0.12, 0.84, ...]" 0.15
item_89 "Deep learning guide" "[0.15, 0.82, ...]" 0.18
item_156 "Neural network basics" "[0.11, 0.85, ...]" 0.22
# Item metadata
table.metadata
itemId category author published tags
item_42 tutorial Alice 2024-12-01 "ml,tutorial,beginner"
item_89 guide Bob 2024-11-15 "dl,guide,intermediate"
item_156 article Carol 2024-10-20 "nn,article,beginner"
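Raw embeddings rarely belong in context; a short preview plus the distance usually carries the signal the model needs. A minimal sketch that truncates vectors before formatting the table; the result objects are illustrative and not tied to any specific vector database client:

def embedding_preview(vector: list[float], n: int = 2) -> str:
    """Keep only the first few dimensions so context is not wasted on raw floats."""
    head = ", ".join(f"{x:.2f}" for x in vector[:n])
    return f'"[{head}, ...]"'

hits = [
    {"id": "item_42", "content": "Machine learning tutorial",
     "vector": [0.12, 0.84, 0.33, 0.91], "distance": 0.15},
    {"id": "item_89", "content": "Deep learning guide",
     "vector": [0.15, 0.82, 0.29, 0.88], "distance": 0.18},
]

lines = ["table.similar_items", "id content embedding_preview distance"]
for h in hits:
    lines.append(f'{h["id"]} "{h["content"]}" {embedding_preview(h["vector"])} {h["distance"]}')
print("\n".join(lines))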
5. LLM System Prompts
Pack 2× more context into system prompts with domain knowledge and configurations.
User Context & Permissions
# User profile
object.user
id name role subscription tier
u_12345 "Alice Johnson" analyst premium 2
# User permissions
table.permissions
resource action allowed
documents read true
documents write true
admin_panel access false
reports create true
reports delete false
# User preferences
object.preferences
theme language timezone dateFormat
dark en-US "America/New_York" "YYYY-MM-DD"
# Recent activity
table.recent_activity
action resource timestamp
viewed doc_456 2025-01-20T14:30:00
created report_789 2025-01-20T13:15:00
updated doc_123 2025-01-20T12:00:00
Domain Knowledge Base
# Product catalog
table.products
id name category price inStock
p1 "Pro Subscription" software 29.99 true
p2 "Enterprise Plan" software 99.99 true
p3 "API Credits" service 0.01 true
# Business rules
table.rules
id condition action
r1 "subscription=pro" "enable_api_access"
r2 "credits<100" "send_low_balance_alert"
r3 "usage>threshold" "upgrade_suggestion"
# Company policies
table.policies
category policy
support "Response within 24 hours"
refund "30-day money back guarantee"
data "GDPR compliant, data encrypted"
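These blocks can be assembled into a single system prompt, either by concatenating pre-rendered ISON text or by serializing one dict with the `ison_parser` calls used earlier. A minimal sketch of the concatenation approach, with illustrative section labels and truncated blocks:

def build_system_prompt(user_block: str, permissions_block: str, knowledge_block: str) -> str:
    """Join pre-rendered ISON blocks into one system prompt."""
    return "\n\n".join([
        "You are an assistant for our analytics product.",
        "# User context", user_block,
        "# Permissions", permissions_block,
        "# Domain knowledge", knowledge_block,
        "Answer using only resources the user is allowed to access.",
    ])

user_block = "object.user\nid name role subscription tier\nu_12345 \"Alice Johnson\" analyst premium 2"
permissions_block = "table.permissions\nresource action allowed\ndocuments read true\nadmin_panel access false"
knowledge_block = "table.policies\ncategory policy\nrefund \"30-day money back guarantee\""
print(build_system_prompt(user_block, permissions_block, knowledge_block))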
6. Tool & Function Results
Structured outputs from API calls, searches, and code execution.
API Response
# Weather API result
object.weather
city temperature humidity condition windSpeed lastUpdated
"San Francisco" 18.5 65 Cloudy 12.3 2025-01-20T15:00:00
# 7-day forecast
table.forecast
date high low condition precipitation
2025-01-21 20 12 Sunny 0
2025-01-22 19 13 "Partly Cloudy" 10
2025-01-23 17 11 Rainy 80
2025-01-24 18 12 Cloudy 20
Search Results
# Web search results
meta.search
query resultsCount timeTaken
"ISON format" 1247 0.23
table.results
rank title url snippet relevance
1 "ISON Documentation" ison.dev/docs "Complete guide to ISON..." 0.95
2 "ISON vs JSON" blog.ai/ison-json "Comparing data formats..." 0.88
3 "ISON Parser" github.com/ison "Python implementation..." 0.82
Code Execution Results
# Test execution results
object.test_run
total passed failed skipped duration
45 42 2 1 12.5
# Failed tests
table.failures
test_name error_message file line
test_authentication "Expected 200, got 401" auth_test.py 45
test_validation "Missing required field" validator_test.py 123
# Performance metrics
table.slow_tests
test_name duration threshold status
test_database_query 5.2 3.0 warning
test_api_integration 2.8 3.0 ok
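The same pattern applies to any tool or function output: serialize the structured result once, right before it goes back to the model. A minimal sketch reusing the `ison_parser` calls shown earlier; the result dict and helper name are illustrative:

import ison_parser  # same calls as in the earlier examples

def format_tool_result(name: str, payload: dict) -> str:
    """Serialize a tool result as ISON text for the tool-result message."""
    doc = ison_parser.from_dict({name: payload})
    return ison_parser.dumps(doc)

test_run = {
    "summary": {"total": 45, "passed": 42, "failed": 2, "skipped": 1, "duration": 12.5},
    "failures": [
        {"test_name": "test_authentication", "error": "Expected 200, got 401",
         "file": "auth_test.py", "line": 45},
    ],
}
print(format_tool_result("test_run", test_run))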
7. Complete Integration Examples
Customer Support AI Agent
import ison_parser
import openai

def handle_support_query(user_id: str, query: str):
    # Fetch user context from database
    user_data = db.get_user_profile(user_id)
    tickets = db.get_user_tickets(user_id)
    products = db.get_user_products(user_id)

    # Convert to ISON (30-70% fewer tokens!)
    context = {
        "user": user_data,
        "recent_tickets": tickets[-5:],
        "active_products": products
    }
    ison_context = ison_parser.dumps(
        ison_parser.from_dict(context)
    )

    # Query LLM with compact context
    response = openai.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": f"""You are a customer support agent.

User Context:
{ison_context}

Provide helpful, personalized support based on the user's history and products."""
            },
            {
                "role": "user",
                "content": query
            }
        ]
    )
    return response.choices[0].message.content
E-commerce Recommendation System
import ison_parser

def get_recommendations(user_id: str):
    # Fetch various data sources
    user_profile = db.get_user(user_id)
    purchase_history = db.get_purchases(user_id)
    viewed_products = db.get_viewed(user_id)
    similar_users = db.get_similar_users(user_id)
    trending = db.get_trending_products()

    # Combine into ISON document
    context_data = {
        "user": user_profile,
        "purchases": purchase_history[-10:],
        "viewed": viewed_products[-20:],
        "similar_users": similar_users[:5],
        "trending": trending[:10]
    }
    ison_context = ison_parser.dumps(
        ison_parser.from_dict(context_data)
    )

    # With ISON, we can include 2× more context!
    recommendations = llm.generate_recommendations(ison_context)
    return recommendations
8. Performance Comparison
Real-World Token Savings
| Use Case | JSON Tokens | ISON Tokens | Savings |
|---|---|---|---|
| RAG Results (5 docs) | 2,100 | 1,050 | 50% |
| Database Query (50 rows) | 3,800 | 1,900 | 50% |
| User Profile + History | 1,200 | 650 | 46% |
| Multi-Agent State | 2,500 | 1,300 | 48% |
| Product Catalog (100 items) | 5,200 | 2,600 | 50% |
Understanding the Efficiency Gains
Why this matters: When you can include more relevant context in a request, LLMs have more information to work with. This often leads to more accurate, better-grounded responses - especially in RAG and multi-step agent workflows.
Ready to Try ISON?
ISON is open source and easy to integrate. Start with the playground to see the difference, or jump straight into the code.
pip install ison-py