# For coding agents

This repo is a curated collection of ready-to-run OCR scripts. Each one is self-contained via UV inline metadata and runnable over the network with `hf jobs uv run` — no clone, no install, no setup.

## Don't rely on this doc — discover the current state

This file will go stale. Prefer these sources of truth:

- `hf jobs uv run --help` — job submission flags (volumes, secrets, flavors, timeouts)
- `hf jobs hardware` — current GPU flavors and pricing
- `hf auth whoami` — check that an HF token is set
- `hf jobs ps` / `hf jobs logs <job-id>` — monitor running jobs
- `ls` the repo to see which scripts actually exist (bucket variants especially)
- [README.md](./README.md) — the table of scripts with model sizes and notes

## Picking a script

The [README.md](./README.md) table lists every script with model size, backend, and a short note. Axes that matter:

- **Model size** vs. accuracy vs. GPU cost. Smaller models are cheaper per document.
- **Backend**: vLLM scripts are usually fastest at scale; `transformers` and `falcon-perception` are alternatives for specific models.
- **Task support**: most scripts do plain text; some expose `--task-mode` (table, formula, layout, etc.) — check the script's own docstring.

For authoritative benchmark numbers on any model in the table, query the model card programmatically — every OCR model publishes eval results on its card:

```python
from huggingface_hub import HfApi

# Eval results live in the card metadata, so expand "cardData" and read
# them off card_data; each entry is an EvalResult with dataset and metric fields.
info = HfApi().model_info("tiiuae/Falcon-OCR", expand=["cardData"])
for r in info.card_data.eval_results:
    print(r.dataset_type, r.metric_value)
```

See the [leaderboard data guide](https://huggingface.co/docs/hub/en/leaderboard-data-guide) for the full API. This is more reliable than any markdown table that might drift.

## Getting help from a specific script

Each script has a docstring at the top with a description and usage examples. To read it without downloading:

```sh
curl -s https://huggingface.co/datasets/uv-scripts/ocr/raw/main/
```