## Overview
ISON provides drop-in integrations for popular LLM frameworks, enabling token-efficient data exchange without changing your existing workflow.
| Integration | Description | Install |
|---|---|---|
| LangChain | OutputParser for ISON-format responses in chains and agents. | `pip install langchain` |
| LlamaIndex | Reader for ISON documents in RAG pipelines. | `pip install llama-index` |
| MCP Server | Model Context Protocol server exposing ISON tools for AI assistants. | `pip install mcp` |
| MCP Client | Client for consuming ISON data from MCP servers. | `pip install mcp` |
| OpenAI | Function-calling integration for structured ISON outputs. | `pip install openai` |
| Anthropic | Tool-use integration for Claude with the ISON format. | `pip install anthropic` |
## LangChain Integration
Parse LLM responses in ISON format for token-efficient structured outputs.
### ISONOutputParser
```python
from langchain.llms import OpenAI
from ison_parser.integrations import ISONOutputParser

parser = ISONOutputParser()
llm = OpenAI()

prompt = f"""List 3 users with id, name, and email.
{parser.get_format_instructions()}"""

response = llm.predict(prompt)
doc = parser.parse(response)

# Access parsed data
users = doc['users']
for row in users.rows:
    print(f"{row['name']}: {row['email']}")
```
### ISONanticOutputParser (Typed Models)
```python
from isonantic import TableModel, Field
from ison_parser.integrations import ISONanticOutputParser

class Product(TableModel):
    __ison_block__ = "table.products"
    id: int = Field(primary_key=True)
    name: str = Field(min_length=1)
    price: float = Field(ge=0)

parser = ISONanticOutputParser(model=Product)
products = parser.parse(response)  # Returns List[Product]

for product in products:
    print(f"{product.name}: ${product.price}")
```
### Additional Parsers

- `ISONListOutputParser` - returns a list of dictionaries
- `ISONDictOutputParser` - returns a single dictionary
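A minimal usage sketch for both, assuming they follow the same `parse()` interface as `ISONOutputParser` above (the return shapes in the comments are illustrative):

```python
from ison_parser.integrations import ISONListOutputParser, ISONDictOutputParser

# Assumed: both follow the same parse() interface as ISONOutputParser
list_parser = ISONListOutputParser()
rows = list_parser.parse(response)     # e.g. [{"id": 1, "name": "Alice"}, ...]

dict_parser = ISONDictOutputParser()
record = dict_parser.parse(response)   # e.g. {"timeout": 30, "debug": False}
```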
## LlamaIndex Integration
Load ISON documents for RAG pipelines with optional ISONantic validation.
### ISONReader
```python
from llama_index.core import VectorStoreIndex
from ison_parser.integrations import ISONReader

# Load from file
reader = ISONReader()
documents = reader.load_data(file="data.ison")

# Create index
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What users are active?")
```
### With ISONantic Validation
```python
from isonantic import TableModel, Field

class User(TableModel):
    __ison_block__ = "table.users"
    id: int
    name: str
    active: bool

reader = ISONReader(model=User)
documents = reader.load_data("users.ison")  # Validates on load
```
### Additional Classes

- `ISONNodeParser` - chunk ISON documents by row, block, or document
- `ISONRAGHelper` - optimize ISON for RAG context windows
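As a sketch of how `ISONNodeParser` might plug into the pipeline above: the standard LlamaIndex `get_nodes_from_documents()` entry point is assumed to be implemented, and the `chunk_by` parameter name is a guess at how the row/block/document granularities are selected.

```python
from llama_index.core import VectorStoreIndex
from ison_parser.integrations import ISONNodeParser

# "chunk_by" is a hypothetical parameter name for the documented
# row/block/document chunking granularities
node_parser = ISONNodeParser(chunk_by="row")

# Standard LlamaIndex node-parser entry point (assumed to be implemented);
# `documents` comes from the ISONReader example above
nodes = node_parser.get_nodes_from_documents(documents)
index = VectorStoreIndex(nodes)
```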
## MCP Server/Client
Model Context Protocol integration for AI assistants like Claude.
### MCP Server
Expose ISON tools via MCP for AI assistants:
Run as a standalone server:

```bash
python -m ison_parser.integrations.mcp_server
```

Or use it programmatically:

```python
from ison_parser.integrations import ISONMCPServer

server = ISONMCPServer(
    name="ison-server",
    data_dir="/path/to/ison/files"
)
await server.run()
```
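To make the standalone server available to Claude Desktop, a typical MCP registration in `claude_desktop_config.json` might look like the following (the server name and Python command are assumptions; adjust for your environment):

```json
{
  "mcpServers": {
    "ison": {
      "command": "python",
      "args": ["-m", "ison_parser.integrations.mcp_server"]
    }
  }
}
```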
### Available Tools
| Tool | Description |
|---|---|
| `parse_ison` | Parse ISON text to JSON |
| `format_ison` | Convert JSON to ISON |
| `validate_ison` | Validate ISON syntax |
| `query_ison` | Query ISON documents |
| `convert_to_isonl` | Convert to the streaming format |
| `estimate_token_savings` | Compare JSON vs ISON token counts |
### MCP Client
```python
from ison_parser.integrations import ISONMCPClient

async with ISONMCPClient() as client:
    # Parse ISON
    result = await client.parse_ison(ison_text)
    print(result.data)

    # Convert JSON to ISON
    result = await client.format_ison({"users": [{"name": "Alice"}]})
    print(result.data)

    # Query an ISON document
    result = await client.query_ison(
        ison_text,
        block_name="users",
        filter_field="active",
        filter_value="true"
    )
```
A synchronous client is also available:
```python
from ison_parser.integrations.mcp_client import ISONMCPClientSync

client = ISONMCPClientSync(use_local_fallback=True)
result = client.parse_ison(ison_text)
```
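The examples above call MCP tools as same-named client methods. Assuming that pattern extends to the rest of the tool table, the remaining tools would be invoked the same way (these method names are assumptions, mirrored from the tool names):

```python
async with ISONMCPClient() as client:
    # Assumed method-per-tool naming, mirroring the tool table above
    check = await client.validate_ison(ison_text)
    savings = await client.estimate_token_savings(ison_text)
    print(check.data, savings.data)
```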
## OpenAI Function Calling
Use ISON format with OpenAI's function calling API.
### OpenAIISONTools
```python
from openai import OpenAI
from ison_parser.integrations import OpenAIISONTools

client = OpenAI()
tools = OpenAIISONTools()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": tools.get_format_instruction()},
        {"role": "user", "content": "List 5 products with id, name, price"}
    ],
    tools=tools.get_tool_definitions(),
    tool_choice={"type": "function", "function": {"name": "return_ison_table"}}
)

doc = tools.parse_response(response)
products = doc['products'].rows
```
### With ISONantic Models
```python
from isonantic import TableModel, Field

class User(TableModel):
    __ison_block__ = "table.users"
    id: int
    name: str

tools = OpenAIISONTools(model=User)
users = tools.parse_response_typed(response)  # Returns List[User]
```
### OpenAIISONChat (High-Level)
```python
from ison_parser.integrations.openai_integration import OpenAIISONChat

chat = OpenAIISONChat(model="gpt-4")

# Query as table
users = chat.query_as_table(
    "List 5 sample users",
    table_name="users",
    fields=["id", "name", "email"]
)

# Query as object
config = chat.query_as_object(
    "Generate app configuration",
    object_name="config"
)
```
## Anthropic Tool Use
Use ISON format with Claude's tool use API.
### AnthropicISONTools
```python
from anthropic import Anthropic
from ison_parser.integrations import AnthropicISONTools

client = Anthropic()
tools = AnthropicISONTools()

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=4096,
    system=tools.get_system_prompt(),
    messages=[{"role": "user", "content": "List 5 products"}],
    tools=tools.get_tool_definitions(),
    tool_choice={"type": "tool", "name": "return_ison_table"}
)

doc = tools.parse_response(response)
products = doc['products'].rows
```
### With ISONantic Models
```python
from isonantic import TableModel, Field

class User(TableModel):
    __ison_block__ = "table.users"
    id: int
    name: str

tools = AnthropicISONTools(model=User)
users = tools.parse_response_typed(response)  # Returns List[User]
```
### AnthropicISONChat (High-Level)
```python
from ison_parser.integrations.anthropic_integration import AnthropicISONChat

chat = AnthropicISONChat(model="claude-sonnet-4-20250514")

# Query as table
users = chat.query_as_table(
    "List 5 sample users",
    table_name="users",
    fields=["id", "name", "email", "active"]
)

# Analyze ISON data
analysis = chat.analyze_ison(
    ison_text,
    "What patterns do you see in the user data?"
)
```
## ISONL Streaming Support
All integrations support ISONL (ISON Lines), the line-oriented streaming format for fine-tuning datasets, event logs, and real-time data.
### LangChain - ISONL OutputParser
```python
from ison_parser.integrations import ISONLOutputParser
from langchain_openai import ChatOpenAI

# Parse streaming ISONL responses
parser = ISONLOutputParser()
llm = ChatOpenAI(model="gpt-4", streaming=True)

# Each complete line is parsed as it arrives; stream() yields message
# chunks, so pass their text content to the parser
for chunk in llm.stream("Generate 5 user records in ISONL format"):
    records = parser.parse(chunk.content)
    for record in records:
        process(record)
```
### LlamaIndex - ISONL Reader
```python
from ison_parser.integrations import ISONLReader
from llama_index.core import VectorStoreIndex

# Load ISONL files (fine-tuning datasets, logs, etc.)
reader = ISONLReader()
documents = reader.load_data("training_data.isonl")

# Stream large ISONL files with constant memory
for doc in reader.lazy_load_data("large_dataset.isonl"):
    process(doc)
```
### MCP Server - ISONL Tools
The MCP server exposes ISONL tools alongside the core set:

- `parse_isonl` - parse ISONL text to records
- `convert_to_isonl` - convert ISON to the ISONL streaming format
- `stream_isonl` - stream ISONL data incrementally

```python
from ison_parser.integrations import ISONMCPClient

# Client usage
async with ISONMCPClient() as client:
    isonl = await client.convert_to_isonl(ison_text)
    records = await client.parse_isonl(isonl)
```
### Python Core - ISONL API
```python
from ison_parser import loads_isonl, dumps_isonl, isonl_stream

# Parse ISONL text
doc = loads_isonl(isonl_text)

# Convert to ISONL
isonl = dumps_isonl(doc)

# Stream large files (constant memory)
with open("events.isonl") as f:
    for record in isonl_stream(f):
        process(record)

# Convert between formats
from ison_parser import ison_to_isonl, isonl_to_ison

isonl = ison_to_isonl(ison_text)
ison = isonl_to_ison(isonl_text)
```
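Putting the streaming API to work: a small filter that scans a large ISONL log in constant memory, assuming `isonl_stream` yields dict-like records as the `process(record)` examples above suggest (the `level` field name is illustrative):

```python
from ison_parser import isonl_stream

def error_events(path):
    """Yield only error-level records from an ISONL log, one line at a time."""
    with open(path) as f:
        for record in isonl_stream(f):
            # "level" is an illustrative field name; adapt to your schema
            if record.get("level") == "error":
                yield record

for event in error_events("events.isonl"):
    print(event)
```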
### Use Cases
- Fine-tuning Datasets - SFT/DPO training data with 30-70% fewer tokens
- Event Streams - Real-time logs, clickstreams, telemetry
- Append-only Storage - Time-series data, audit logs
- Large Dataset Processing - Stream millions of records with constant memory
## ISONantic Support
All integrations support optional ISONantic models for typed validation.
### Defining Models
```python
from typing import Optional

from isonantic import TableModel, ObjectModel, Field, Reference

class User(TableModel):
    __ison_block__ = "table.users"
    id: int = Field(primary_key=True)
    name: str = Field(min_length=1, max_length=100)
    email: str = Field(pattern=r".*@.*\..+")
    active: bool = True
    team: Optional[Reference["Team"]] = None

class Config(ObjectModel):
    __ison_block__ = "object.config"
    timeout: int = Field(ge=1, le=300, default=30)
    debug: bool = False
```
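Assuming ISONantic models construct like Pydantic-style models (direct keyword construction, with defaults filling omitted fields; this is an assumption, not documented above), they behave as plain typed objects:

```python
# Assumes Pydantic-style direct construction
user = User(id=1, name="Alice", email="alice@example.com")
print(user.active)   # True (field default)

cfg = Config(debug=True)
print(cfg.timeout)   # 30 (Field default)
```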
### Using with Any Integration
```python
# LangChain
parser = ISONOutputParser(model=User)

# LlamaIndex
reader = ISONReader(model=User)

# OpenAI
tools = OpenAIISONTools(model=User)
users = tools.parse_response_typed(response)

# Anthropic
tools = AnthropicISONTools(model=User)
users = tools.parse_response_typed(response)
```
### Benefits
- Type Safety - Full Python type hints and validation
- Schema-Aware Prompts - Auto-generated format instructions
- Validation - Field constraints, types, and references
- Error Recovery - Partial data recovery in non-strict mode
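A sketch of the non-strict mode mentioned above; the `strict=False` flag spelling is an assumption:

```python
# Hypothetical flag name: tolerate malformed rows and keep the valid ones
parser = ISONOutputParser(model=User, strict=False)
users = parser.parse(partially_malformed_response)  # valid rows only
```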
## Installation
### Core Package
```bash
pip install ison-py
```
### With Integrations
```bash
# LangChain
pip install ison-py langchain langchain-core

# LlamaIndex
pip install ison-py llama-index llama-index-core

# MCP
pip install ison-py mcp

# OpenAI
pip install ison-py openai

# Anthropic
pip install ison-py anthropic

# ISONantic (for typed models)
pip install ison-py isonantic

# All integrations (quoted so shells don't expand the brackets)
pip install "ison-py[all]"
```
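After installing the extras, a quick import check using the classes documented above confirms the integrations are available:

```python
# Smoke test: each import requires the corresponding extra to be installed
from ison_parser.integrations import (
    ISONOutputParser,    # langchain
    ISONReader,          # llama-index
    ISONMCPClient,       # mcp
    OpenAIISONTools,     # openai
    AnthropicISONTools,  # anthropic
)
```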