## Installation

### Python

```bash
pip install ison-py
```

### TypeScript / JavaScript

```bash
npm install ison-js
# or
yarn add ison-js
```

**Features:** Full TypeScript types, zero dependencies, works in Node.js and browsers.

### Rust

```bash
cargo add ison-rs
```

Or add to `Cargo.toml` directly:

```toml
[dependencies]
ison-rs = "1.0"
```

**Features:** Zero-copy parsing, optional serde integration, no unsafe code.

### Go

```bash
go get github.com/maheshvaikri-code/ison/ison-go
```

**Features:** Full ISON support, ISONantic validation, idiomatic Go API.

### C++ (Header-Only)

A C++11-compatible header-only library, perfect for llama.cpp and other C++ projects.

```bash
# Option 1: Copy the header
cp ison_parser.hpp /your/project/include/
```

```cmake
# Option 2: CMake
add_subdirectory(ison-cpp)
target_link_libraries(your_target PRIVATE ison::parser)
```

**Features:** Full ISON support, ISONL streaming, JSON export, C++11 minimum.
## Quick Example

Here's a simple example to get you started:

```python
import ison_parser

# Your structured data
data = {
    "products": [
        {"id": 1, "name": "Widget", "price": 29.99},
        {"id": 2, "name": "Gadget", "price": 49.99},
        {"id": 3, "name": "Gizmo", "price": 19.99}
    ]
}

# Convert to ISON
doc = ison_parser.from_dict(data)
ison_text = ison_parser.dumps(doc)
print(ison_text)
```

Output:

```
table.products
id name price
1 Widget 29.99
2 Gadget 49.99
3 Gizmo 19.99
```
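Because the table header names each field once, per-row overhead drops compared to JSON. A quick way to see the difference for the example above (a minimal sketch; character counts are only a rough proxy for tokens, and the exact savings depend on your data):

```python
import json
import ison_parser

data = {
    "products": [
        {"id": 1, "name": "Widget", "price": 29.99},
        {"id": 2, "name": "Gadget", "price": 49.99},
        {"id": 3, "name": "Gizmo", "price": 19.99}
    ]
}

json_text = json.dumps(data)
ison_text = ison_parser.dumps(ison_parser.from_dict(data))

# Field names appear once in the ISON header instead of once per row
print(f"JSON: {len(json_text)} chars")
print(f"ISON: {len(ison_text)} chars")
```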
## Using with OpenAI API

The most common use case is sending ISON context to LLM APIs:

```python
import openai
import ison_parser

def call_llm_with_context(query, context_data):
    # Convert context to ISON
    ison_context = ison_parser.dumps(
        ison_parser.from_dict(context_data)
    )

    # Call OpenAI with ISON in the message content
    response = openai.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": f"""Context:
{ison_context}

Question: {query}

Please answer using the context provided."""
            }
        ]
    )
    return response.choices[0].message.content

# Usage
products = {
    "products": [
        {"id": 1, "name": "Widget", "price": 29.99, "stock": 15},
        {"id": 2, "name": "Gadget", "price": 49.99, "stock": 8}
    ]
}

answer = call_llm_with_context(
    "Which product is cheaper?",
    products
)
print(answer)
```
## Using with Claude (Anthropic)

ISON works seamlessly with Claude's API and is especially powerful for agentic workflows:

```python
import anthropic
import ison_parser

def call_claude_with_context(query, context_data):
    # Convert to ISON - more efficient context representation
    ison_context = ison_parser.dumps(
        ison_parser.from_dict(context_data)
    )

    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        system=f"""You are a helpful assistant.

Available context (in ISON format - a compact data format):
{ison_context}""",
        messages=[
            {"role": "user", "content": query}
        ]
    )
    return response.content[0].text

# Example: Customer support with user context
user_context = {
    "user": {"id": "u123", "name": "Alice", "tier": "premium"},
    "recent_orders": [
        {"id": "ord_1", "product": "Widget Pro", "status": "shipped"},
        {"id": "ord_2", "product": "Gadget Plus", "status": "delivered"}
    ],
    "support_tickets": [
        {"id": "t_1", "issue": "Shipping delay", "resolved": True}
    ]
}

answer = call_claude_with_context(
    "When will my Widget Pro arrive?",
    user_context
)
```

The compact ISON format leaves more room for the LLM's reasoning.
## Parsing ISON

You can also parse ISON strings back into Python objects:

```python
# Parse ISON text
ison_text = """
table.users
id name email active
1 Alice alice@example.com true
2 Bob bob@example.com false
"""

doc = ison_parser.loads(ison_text)

# Access blocks
users = doc['users']

# Iterate through rows
for row in users.rows:
    if row['active']:
        print(f"Active user: {row['name']}")

# Convert to JSON
json_output = doc.to_json()
print(json_output)
```
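If downstream code wants plain Python objects rather than document and row wrappers, one simple route is through the JSON export shown above (a minimal sketch; the exact shape of the exported JSON depends on the library):

```python
import json

# Reuse `doc` from the example above.
# ISON document -> JSON string -> plain Python objects
plain = json.loads(doc.to_json())
print(plain)  # exact structure depends on the exporter
```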
## Working with References

ISON supports references for graph-like data:

```python
ison_text = """
table.teams
id name
10 AI-Research
20 Platform

table.users
id name team
101 Alice :10
102 Bob :10
103 Carol :20
"""

doc = ison_parser.loads(ison_text)

for user in doc['users'].rows:
    team_ref = user['team']
    print(f"{user['name']} is in team {team_ref.id}")
## RAG Pipeline Example

A complete example for a RAG system:

```python
import openai
import ison_parser
from your_vector_db import search

def rag_query(question: str):
    # 1. Retrieve similar documents
    results = search(question, top_k=5)

    # 2. Format as ISON
    context = {
        "retrieved_documents": [
            {
                "id": r.id,
                "content": r.content,
                "score": r.score,
                "source": r.metadata.get("source")
            }
            for r in results
        ]
    }
    ison_context = ison_parser.dumps(
        ison_parser.from_dict(context)
    )

    # 3. Generate answer
    response = openai.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "Answer questions using the provided context."
            },
            {
                "role": "user",
                "content": f"""Context:
{ison_context}

Question: {question}

Provide a detailed answer with citations."""
            }
        ]
    )
    return response.choices[0].message.content

# Usage
answer = rag_query("What is ISON?")
print(answer)
```
## CLI Usage

The parser also includes a command-line interface:

```bash
# Validate ISON file
python ison_parser.py data.ison --validate

# Convert ISON to JSON
python ison_parser.py data.ison -f json -o output.json

# Normalize ISON file
python ison_parser.py data.ison -f ison -o clean.ison
```
## Best Practices

- **Use ISON at the LLM boundary** - Convert right before sending to the LLM, not earlier in your pipeline
- **Keep JSON for internal APIs** - Use JSON for service-to-service communication and ISON for LLM communication
- **Quote when needed** - Strings with spaces or special characters should be quoted (see the sketch after this list)
- **Normalize arrays** - Convert arrays of objects into separate tables with references
- **Use references for relationships** - Instead of nesting, use `:ID` notation
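As a quick illustration of the quoting rule, round-trip a value with an embedded space; the serializer is expected to quote it so it stays a single column (a minimal sketch; the quoting syntax shown in the comment is an assumption, not confirmed spec behavior):

```python
import ison_parser

# "Widget Pro" contains a space, so it must be quoted in the ISON output
data = {"products": [{"id": 1, "name": "Widget Pro", "price": 49.99}]}
print(ison_parser.dumps(ison_parser.from_dict(data)))

# Expected output (quoting syntax assumed):
# table.products
# id name price
# 1 "Widget Pro" 49.99
```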