Quick start

pip install openai pyyaml
export OPENAI_API_KEY="sk-your-key"
cd src/araw

# Start a new exploration
python auto_expand_llm.py --db my_analysis.db --seed "Should I start a business?"

# Continue expanding
python auto_expand_llm.py --db my_analysis.db --continue --parallel 5

# View in browser
python visualize.py --db my_analysis.db --serve
# Open http://localhost:8080

Interactive visualization

visualize.py --serve launches a full interactive graph viewer in your browser.

Core tools

araw_engine.py

SQLite-based ARAW engine. Create searches, branch claims, query by strategy, export trees. Used as a library by all other tools and as a standalone CLI.

python araw_engine.py create "I need to change careers" my_search.db
python araw_engine.py stats my_search.db
python araw_engine.py export my_search.db
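
Because the engine is also imported as a library by the other tools, a script can drive it directly. The snippet below is only a sketch of that pattern: the class and method names (ARAWEngine, create_search, add_node, stats) are assumptions for illustration, not the actual API; see src/README.md for the real interface.

# Hypothetical library usage -- names are illustrative, not the engine's real API.
from araw_engine import ARAWEngine  # assumed import path

engine = ARAWEngine("my_search.db")                      # open or create the SQLite store
root = engine.create_search("I need to change careers")  # seed claim becomes the root node
engine.add_node(parent=root, claim="I could change roles inside my current company")
print(engine.stats())                                    # rough equivalent of the `stats` CLI command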

auto_expand_llm.py

Auto-expands ARAW trees using an LLM. Generates meaningful branches with leverage scores, crux detection, and domain classification across 18 domains.

python auto_expand_llm.py --db career.db --seed "I should quit my job"
python auto_expand_llm.py --db career.db --continue --parallel 5 --max-depth 6
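
Expansion is driven by the OpenAI client installed in the quick start. The sketch below is not the tool's actual prompt or parsing code, just a minimal illustration of asking a model for scored branches; the prompt wording, model name, and JSON shape are assumptions.

# Illustrative only -- the real tool's prompts, schema, and scoring differ.
import json
from openai import OpenAI  # installed via `pip install openai`

client = OpenAI()  # reads OPENAI_API_KEY from the environment
claim = "I should quit my job"

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; use whatever your key can access
    messages=[{
        "role": "user",
        "content": f"Give 3 alternative branches for the claim '{claim}'. "
                   'Return only a JSON array of objects with keys '
                   '"claim", "leverage_score", "is_crux", "domain".',
    }],
)
branches = json.loads(resp.choices[0].message.content)
for b in branches:
    print(b["leverage_score"], b["claim"])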

auto_expand.py

Pattern-based expansion that doesn't require an API key. Uses keyword matching to generate alternatives. Good for quick prototyping.

python auto_expand.py --db quick.db --seed "I must learn to code" --duration 60
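
The pattern-based approach boils down to keyword-triggered reframings. Here is a toy sketch of that idea; the actual tool's keyword patterns and rules are its own.

# Toy illustration of keyword-triggered reframing; not auto_expand.py's actual rules.
PATTERNS = {
    "must":   ["What happens if I don't?", "Who says this is required?"],
    "should": ["What do I actually want?", "Whose expectation is this?"],
    "can't":  ["What would make this possible?", "Is this a skill gap or a hard constraint?"],
}

def expand(claim: str) -> list[str]:
    words = claim.lower().split()
    return [alt for kw, alts in PATTERNS.items() if kw in words for alt in alts]

print(expand("I must learn to code"))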

visualize.py

Launches a Sigma.js graph viewer in the browser, or exports to JSON/GEXF for external tools.

python visualize.py --db my.db --serve
python visualize.py --db my.db --export graph.json
python visualize.py --db my.db --export graph.gexf --format gexf
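
GEXF is a standard graph interchange format, so the export can be loaded by common graph libraries. For example, with networkx (not part of the quick-start install, so treat it as an optional extra):

# Requires `pip install networkx`; reads the GEXF export produced above.
import networkx as nx

g = nx.read_gexf("graph.gexf")
print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")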

Analysis tools

synthesize.py

Analyzes multiple ARAW databases to find common themes, extract actionable items, and detect contradictions.

python synthesize.py *.db
python synthesize.py *.db --extract-actions actions.json
python synthesize.py *.db --find-tensions
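
Because every database shares the same nodes table (see the Database section below), a rough version of the theme-finding step can be sketched with plain sqlite3. The real tool goes much further (action extraction, tension detection); this only illustrates the cross-database pass.

# Rough sketch of cross-database theme counting.
import glob, sqlite3
from collections import Counter

themes = Counter()
for path in glob.glob("*.db"):
    con = sqlite3.connect(path)
    for (claim,) in con.execute("SELECT claim FROM nodes"):
        for word in claim.lower().split():
            if len(word) > 4:          # crude keyword filter
                themes[word] += 1
    con.close()

print(themes.most_common(10))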

grounding.py

Connects abstract claims to real-world evidence with epistemic quality tracking. Prioritizes by upstream influence, convergence, and decision-relevance.

python grounding.py --db my.db --prioritize
python grounding.py --db my.db --ground-top 10
python grounding.py --db my.db --integrate
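
The prioritization can be thought of as a weighted score over those three signals. The formula and weights below are illustrative assumptions, not the tool's actual scoring.

# Illustrative priority score only; weights and inputs are assumptions.
def priority(upstream_influence: float, convergence: float, decision_relevance: float) -> float:
    # All three inputs assumed normalized to [0, 1].
    return 0.4 * upstream_influence + 0.3 * convergence + 0.3 * decision_relevance

print(priority(0.9, 0.5, 0.7))  # higher score = ground this claim sooner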

evidence_engine.py

Systematic evidence gathering using progressively broader sources (Wikipedia, arXiv, PubMed, World Bank, LLM synthesis).

python evidence_engine.py --claim "Global poverty has decreased"
python evidence_engine.py --test-sources
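
The "progressively broader sources" idea starts with the cheapest lookups first. As a minimal illustration of the first rung, here is a stdlib-only query against Wikipedia's public summary endpoint; this is not the tool's own source code, and the other sources follow the same pattern with their respective APIs.

# Minimal stdlib example of the first evidence source (Wikipedia); illustrative only.
import json, urllib.parse, urllib.request

topic = "Extreme poverty"
url = "https://en.wikipedia.org/api/rest_v1/page/summary/" + urllib.parse.quote(topic)
req = urllib.request.Request(url, headers={"User-Agent": "araw-evidence-demo/0.1"})
with urllib.request.urlopen(req) as resp:
    summary = json.load(resp)

print(summary.get("extract", "no summary found"))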

commitment_analyzer.py

Determines which claims can be committed to (all alternatives contradict) vs. which remain guesses (coherent alternatives exist).

python commitment_analyzer.py --db my.db analyze
python commitment_analyzer.py --db my.db foundations
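
The underlying rule is simple: if every alternative to a claim contradicts it, the claim can be committed to; if at least one coherent alternative survives, it stays a guess. A toy sketch of that rule follows; the data shape is assumed, not the tool's schema.

# Toy version of the commit-vs-guess rule; the alternative shape is assumed.
def classify(alternatives: list[dict]) -> str:
    coherent = [a for a in alternatives if not a["contradicts"]]
    return "guess" if coherent else "commit"

alts = [{"claim": "Stay and renegotiate", "contradicts": False},
        {"claim": "Quit immediately", "contradicts": True}]
print(classify(alts))  # -> "guess": a coherent alternative still exists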

Conversion and bridging

synthesis.py

Ask natural-language questions about an ARAW tree and get answers grounded in the tree's data.

python synthesis.py --db my.db "What are the key cruxes?"
python synthesis.py --db my.db "Where do branches converge?"

md_to_sqlite.py

Converts conversational ARAW sessions (saved as markdown) into SQLite format.

python md_to_sqlite.py session.md
python md_to_sqlite.py sessions/*.md -o all.db

bridge_to_gosm.py

Exports findings as GOSM-compatible YAML. Extracts crux nodes, high-impact items, and verification targets.

python bridge_to_gosm.py database.db --output assumptions.yaml
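
The output is ordinary YAML, so it can be inspected with the pyyaml package installed in the quick start. The key names inside the file depend on the GOSM schema and are not assumed here.

# Inspect the exported YAML with pyyaml (installed in the quick start).
import yaml

with open("assumptions.yaml") as f:
    data = yaml.safe_load(f)

print(type(data), list(data) if isinstance(data, dict) else len(data))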

Database

All data is stored in SQLite. Query directly:

# High-leverage unexplored nodes
sqlite3 my.db "SELECT claim, leverage_score FROM nodes
  WHERE status='unexplored' ORDER BY leverage_score DESC LIMIT 10"

# Crux nodes
sqlite3 my.db "SELECT claim FROM nodes
  WHERE json_extract(content, '$.is_crux') = 1"

# Tree stats
sqlite3 my.db "SELECT COUNT(*) as total,
  SUM(CASE WHEN status='explored' THEN 1 ELSE 0 END) as explored,
  MAX(depth) as max_depth FROM nodes"
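
The same queries work from Python's standard sqlite3 module, which is how you would feed the data into your own scripts. This runs the high-leverage query shown above.

# Run the high-leverage query via the standard library.
import sqlite3

con = sqlite3.connect("my.db")
rows = con.execute(
    "SELECT claim, leverage_score FROM nodes "
    "WHERE status='unexplored' ORDER BY leverage_score DESC LIMIT 10"
)
for claim, score in rows:
    print(f"{score:.2f}  {claim}")
con.close()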

Full documentation: src/README.md on GitHub →