mdstill is a document preprocessor for LLM and RAG workflows, and at its core a fast drag-and-drop Markdown converter. Whether you're preparing context for ChatGPT, importing a PDF into Obsidian, or just need readable Markdown from a 50-page report, mdstill handles both jobs. Where generic text extractors dump messy output, mdstill produces clean, semantic Markdown that preserves tables, headings, and document structure: the things both humans and LLMs actually need.

What you can do with it:

• Prepare documents for RAG pipelines: chunk-ready, with semantic boundaries preserved
• Feed PDFs, Word, spreadsheets, PowerPoint, EPUB, and 14+ other formats into ChatGPT, Claude, or Gemini without losing tables
• Build knowledge bases in Obsidian, Notion, or Logseq from existing document archives
• Turn any document into clean Markdown for editing, reading, or sharing: drag, drop, done
• Extract structured context for AI agents and embeddings

How it's different:

• Deep mode runs layout-aware parsing (tables, OCR, multi-column PDFs), not just text dumping
• Output is up to 30% more token-efficient than raw text extraction, so LLM costs drop
• REST API available for pipeline automation
• Free tier with no signup required
• Built for everyone: drag-and-drop for non-technical users who just need a Markdown copy, a REST API for engineers shipping AI features at scale
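For engineers wiring the REST API into a pipeline, the general shape of a conversion call can be sketched as follows. This is an illustration only: the base URL, the `/v1/convert` path, and the `mode` and `output` parameter names are assumptions, not the documented API; the sketch just assembles the request pieces so any HTTP client can send them.

```python
# Hypothetical sketch of driving a document-conversion REST API.
# The base URL, endpoint path, and parameter names below are
# assumptions for illustration; consult the real API reference.

def build_convert_request(path, mode="deep", base_url="https://api.example.com"):
    """Assemble the pieces of a hypothetical conversion request.

    Returns (url, form_fields, file_field_name) so the caller can
    hand them to any HTTP client (requests, httpx, curl, ...).
    """
    url = f"{base_url}/v1/convert"  # assumed endpoint path
    form_fields = {
        "mode": mode,          # assumed: "fast" vs. "deep" (layout-aware parsing)
        "output": "markdown",  # assumed output-format selector
        "filename": path,      # original filename, for format detection
    }
    return url, form_fields, "file"

url, fields, file_field = build_convert_request("report.pdf", mode="deep")
print(url)             # https://api.example.com/v1/convert
print(fields["mode"])  # deep
```

With a client such as `requests`, the returned pieces would map onto a multipart POST (the document bytes under `file_field`, the settings as form data), but the exact contract belongs to the API's own reference documentation.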





