OpenOSINT - An autonomous CLI agent for OSINT orchestration using LLM function calling (github.com)
from justsouichi@programming.dev to programming@programming.dev on 09 May 06:13
https://programming.dev/post/50101823

Hey everyone,

I recently open-sourced OpenOSINT, a Python-based CLI framework designed to automate reconnaissance and threat intelligence workflows.

The architectural problem: Traditional OSINT automation usually relies on rigid bash scripts or static Python pipelines. If a tool fails, or if a specific finding requires a sudden pivot (e.g., finding an unexpected subdomain and needing to run a specific vulnerability check on it), a static pipeline simply breaks or requires massive if/else chains.

The approach: To solve this, I built an orchestrator leveraging the native tool-use/function calling APIs from Anthropic and OpenAI.

Under the hood, it's strictly CLI-native and outputs structured reports.
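For anyone curious what "orchestrator leveraging native tool-use APIs" means in practice, here is a minimal sketch of the standard function-calling loop: the model proposes a tool call, the orchestrator executes it, and the result is appended back into the conversation. The names (`TOOLS`, `run_tool`, `model_step`) and the tool registry are illustrative assumptions, not the actual OpenOSINT code:

```python
import shlex
import subprocess

# Hypothetical tool registry: maps a tool name the LLM may request
# to a shell command template. Purely illustrative.
TOOLS = {
    "dns_lookup": "dig +short {target}",
    "whois": "whois {target}",
}

def run_tool(name: str, target: str) -> str:
    """Execute a registered CLI tool and capture its output."""
    cmd = shlex.split(TOOLS[name].format(target=target))
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
    return result.stdout or result.stderr

def orchestrate(model_step, max_turns: int = 10) -> list[dict]:
    """Drive the tool-use loop. model_step stands in for a call to an
    Anthropic/OpenAI function-calling endpoint: given the transcript so
    far, it returns either a tool request or a final report."""
    transcript: list[dict] = []
    for _ in range(max_turns):
        action = model_step(transcript)
        if action["type"] == "final":
            transcript.append(action)
            break
        # Model asked for a tool: run it and feed the result back.
        output = run_tool(action["tool"], action["target"])
        transcript.append({"type": "tool_result",
                           "tool": action["tool"],
                           "output": output})
    return transcript
```

The point of the loop is that the model, not a static pipeline, decides the next step each turn, which is what allows mid-run pivots like chasing an unexpected subdomain.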

You can check out the code and the CLI demo here: github.com/OpenOSINT/OpenOSINT

I’m looking for some technical feedback on the codebase. Specifically, I’d love your thoughts on how to better manage context-window limits when dealing with massive raw outputs (like huge DNS dumps or nmap scans) before feeding them back into the LLM’s memory.

Any architectural critiques or suggestions are welcome!

#programming


sobchak@programming.dev on 09 May 07:35

Specifically, I’d love your thoughts on how to better manage context-window limits when dealing with massive raw outputs (like huge DNS dumps or nmap scans) before feeding them back into the LLM’s memory.

Launch a sub-agent that reports its findings back to the main orchestration agent. If the output is too long even for the sub-agent’s context, you can chunk it up and have a sub-sub-agent per chunk report back to the sub-agent: a short summary, just the interesting lines, or “no relevant lines found.” You can get even fancier by giving the sub-agent tools like grep, head, and tail so it can search the text instead of reading the whole thing directly.
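The chunking idea above is essentially a map-reduce over the raw output. A rough sketch, where `summarize_chunk` stands in for a per-chunk sub-agent call (here it's a trivial keyword filter purely for illustration; a real implementation would call the LLM):

```python
def chunk_text(text: str, max_lines: int = 200) -> list[str]:
    """Split raw tool output into line-based chunks that each fit
    comfortably in a sub-agent's context window."""
    lines = text.splitlines()
    return ["\n".join(lines[i:i + max_lines])
            for i in range(0, len(lines), max_lines)]

def summarize_chunk(chunk: str, keywords: tuple[str, ...]) -> str:
    """Stand-in for a sub-sub-agent: keep only interesting lines, or
    return a sentinel when nothing matched."""
    hits = [ln for ln in chunk.splitlines()
            if any(k in ln.lower() for k in keywords)]
    return "\n".join(hits) if hits else "no relevant lines found"

def reduce_output(raw: str,
                  keywords: tuple[str, ...] = ("open", "cname", "vuln")) -> str:
    """Map-reduce a huge raw output down to a digest short enough for
    the main agent to keep in its context."""
    summaries = [summarize_chunk(c, keywords) for c in chunk_text(raw)]
    return "\n".join(s for s in summaries if s != "no relevant lines found")
```

The same shape works whether the per-chunk step is a keyword grep or an actual sub-agent prompt; only `summarize_chunk` changes.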

Surprised you’re not using LangChain/LangGraph, as it makes some of the things you’re doing easier. But it looks like you’re vibecoding this anyway, so it’s just doing whatever Claude Code (or whichever agent) decides to do. My suggestion would be to code it yourself with minimal AI assistance; otherwise this will turn into an unmaintainable mess as time goes on, and eventually the AI coding agent will get stuck and be unable to make real progress.

Now that I think of it, you could probably get Claude Code or OpenCode to do everything this project does: install all the needed tools in your environment, create a new empty project, tell it what tools are available, allow it to download any others it needs, and prompt it to do the recon (you may need an abliterated, Heretic, or otherwise uncensored model for some tasks).