
  • codethief 1 hour ago
    Came here because I thought this might be related to https://git.sr.ht/~alip/syd / https://gitlab.exherbo.org/sydbox/sydbox, which has been discussed here on HN various times over the years.
    • paul2495 46 minutes ago
      Thanks for the links, but that's a different project. Those are sandboxing and syscall-monitoring tools, while my Syd is an offline AI assistant built for security workflows (DFIR, pentesting, malware triage, tool-output reasoning, etc.).

      Completely unrelated codebases; they just happen to share the same name.
  • paul2495 1 hour ago
    Author here. Happy to answer questions!

    A bit more context on how Syd works: it uses Dolphin Llama 3 (dolphin-2.9-llama3-8b) running locally via llama-cpp-python. You'll need about 12-14GB RAM when the model is loaded, plus ~8GB disk space for the base system (models, FAISS index, CVE database). The full exploit database is an optional 208GB add-on.

    What makes this different from just wrapping an LLM: the core challenge wasn't the AI, it was getting security tool output into a form an LLM can actually understand. Tools like YARA, Volatility, and Nmap emit unstructured text with inconsistent formats. I built parsers that convert this into structured JSON, which the LLM can then reason about. Without that layer, you get hallucinations and garbage analysis. Rough sketches of the parsing layer and the cross-tool chain are at the end of this comment.

    Current tool integrations:
    - Red Team: Nmap (with CVE correlation), Metasploit, Sliver C2, exploit database lookup
    - Blue Team: Volatility 3 (memory forensics), YARA (malware detection), Chainsaw (Windows event log analysis), PCAP analysis, Zeek, Suricata
    - Cross-tool intelligence: YARA detection → CVE lookup → patching steps; Nmap scan → suggested Metasploit modules with ready-to-run commands

    The privacy angle exists because I couldn't paste potential malware samples, memory dumps, or customer network scans into ChatGPT without violating every security policy. Everything runs on localhost:11434; no data ever leaves your machine. For blue teamers handling sensitive investigations or red teamers on client networks, this is non-negotiable.

    Real-world example from the demo: Syd scans a directory with YARA, hits on a custom ransomware rule, automatically looks up which CVE was exploited (EternalBlue/MS17-010), explains the matched API calls, and generates an incident response workflow, all in about 15 seconds. That beats manual analysis by a significant margin.

    What I'd love feedback on:
    1. Tool suggestions: What other security tools would you want orchestrated this way? I'm looking at adding Capa (malware capability detection) and potentially Ghidra integration.
    2. For SOC/IR folks: How are you currently balancing AI utility with operational security? Are you avoiding LLMs entirely, or have you found other solutions?
    3. Beta testers: If you're actively doing red/blue team work and want to try this on real investigations, I'm looking for people to test and provide feedback. I'm especially interested in hearing what breaks or what features are missing.

    The goal isn't to replace your expertise; it's to automate the tedious parts (hex decoding, correlating CVEs, explaining regex patterns) so you can focus on the actual analysis. Think of it as having a junior analyst who never gets tired of looking up obscure Windows API calls. Check out sydsec.co.uk for more info, or watch the full demo at the YouTube link in the original post.
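
    Sketch 1, the parsing layer plus local inference. This is only the shape of the idea, not Syd's actual parsers or prompts: the llama-cpp-python calls and Nmap's -oX XML structure are real, while the model filename, JSON fields, and prompt wording are illustrative placeholders.

      # Sketch: flatten `nmap -oX scan.xml ...` output into structured JSON,
      # then ask a locally loaded model to reason over it.
      # Illustrative only; not the actual Syd code.
      import json
      import xml.etree.ElementTree as ET

      from llama_cpp import Llama


      def parse_nmap_xml(path):
          """Flatten Nmap XML into a list of {host, port, service} dicts."""
          results = []
          root = ET.parse(path).getroot()
          for host in root.findall("host"):
              addr_el = host.find("address")
              if addr_el is None:
                  continue
              for port in host.findall("./ports/port"):
                  if port.find("state").get("state") != "open":
                      continue
                  svc = port.find("service")
                  results.append({
                      "host": addr_el.get("addr"),
                      "port": int(port.get("portid")),
                      "protocol": port.get("protocol"),
                      "service": svc.get("name") if svc is not None else None,
                      "product": svc.get("product") if svc is not None else None,
                      "version": svc.get("version") if svc is not None else None,
                  })
          return results


      # Load the quantized GGUF model once; inference stays entirely local.
      llm = Llama(model_path="dolphin-2.9-llama3-8b.Q4_K_M.gguf", n_ctx=8192, verbose=False)

      scan = parse_nmap_xml("scan.xml")
      prompt = (
          "You are assisting with a penetration test. Given this structured "
          "Nmap data, list likely CVEs and suggested next steps:\n"
          + json.dumps(scan, indent=2)
      )
      out = llm.create_chat_completion(messages=[{"role": "user", "content": prompt}])
      print(out["choices"][0]["message"]["content"])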
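
    Sketch 2, the cross-tool chain (YARA detection → CVE lookup) in miniature. The yara-python calls are real; the SQLite CVE table, its schema, and the rule-metadata convention are placeholders standing in for the offline CVE database, and the structured result would then be handed to the same local model as above.

      # Sketch: YARA match -> offline CVE lookup -> remediation steps.
      # yara-python is real; the SQLite schema below is a placeholder.
      import sqlite3

      import yara

      rules = yara.compile(filepath="rules/ransomware.yar")
      db = sqlite3.connect("cve.db")  # hypothetical offline CVE store

      for match in rules.match("sample.bin"):
          # Convention here: the rule carries the CVE it detects,
          # e.g.  meta: cve = "CVE-2017-0144"
          cve_id = match.meta.get("cve")
          if not cve_id:
              continue
          row = db.execute(
              "SELECT summary, remediation FROM cves WHERE id = ?", (cve_id,)
          ).fetchone()
          print(f"[{match.rule}] matched -> {cve_id}")
          if row:
              summary, remediation = row
              print(f"  summary: {summary}")
              print(f"  patch/remediation: {remediation}")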