As many others pointed out, the released files are nearly nothing compared to the full dataset. Personally I've been fiddling a lot with OSINT and analytics over the publicly available Reddit data (a considerable amount of my spare time over the last year), and the one thing I can say is that LLMs are under-performing (huge understatement) - they are borderline useless compared to traditional ML techniques. As far as LLMs go, the best performers are the open source uncensored models (the most uncensored and unhinged ones), while the worst performers are the proprietary paid models, especially over the last 2-3 months: they have been nerfed into oblivion, to the point where a simple prompt like "who is eligible to vote in US presidential elections" is treated as a controversial question. So in the unlikely event that the full files are released, I would personally look at traditional NLP techniques long before investing any time into LLMs.
On the limited dataset: completely agree - the public files are a fraction of what exists, and I should have mentioned up front that this covers not all files but all publicly available ones. But that's exactly why making even this subset searchable matters. The bar right now is people manually Ctrl+F-ing through PDFs or relying on secondhand claims. This at least lets anyone verify what is public.<p>On LLMs vs traditional NLP: I hear you, and I've seen similar issues with LLM hallucination on structured data. That's why the architecture here is hybrid:<p>- Traditional exact regex/grep search for names, dates, identifiers
- Vector search for semantic queries
- LLM orchestration layer that must cite sources and can't generate answers without grounding
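To make the three layers above concrete, here's a toy sketch of how an exact pass and a semantic pass can be combined, with a bag-of-words similarity standing in for a real embedding model (the corpus, ids, and function names are all made up for illustration, not the actual implementation):

```python
import re
from collections import Counter
from math import sqrt

# Toy corpus standing in for the document archive; ids and text are invented.
DOCS = {
    "doc1": "Flight log lists the passenger name and the date 1997-03-12.",
    "doc2": "Email thread about scheduling a meeting in Palm Beach.",
}

def exact_search(pattern, docs):
    """Regex pass: deterministic hits for names, dates, identifiers."""
    rx = re.compile(pattern, re.IGNORECASE)
    return [doc_id for doc_id, text in docs.items() if rx.search(text)]

def _bow(text):
    # Bag-of-words vector, a stand-in for a real embedding model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def _cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query, docs, k=1):
    """Vector pass: rank documents by similarity to the query."""
    qv = _bow(query)
    ranked = sorted(docs, key=lambda d: _cosine(qv, _bow(docs[d])), reverse=True)
    return ranked[:k]

def hybrid_search(query, docs, k=2):
    # Exact hits first, then semantic matches fill the remaining slots.
    exact = exact_search(re.escape(query), docs)
    extra = [d for d in semantic_search(query, docs, k=k) if d not in exact]
    return (exact + extra)[:k]
```

The orchestration layer then only sees whatever this retrieval step returns, which is where the grounding is supposed to come from.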
> can't generate answers without grounding<p>"Can't" seems like quite a strong claim. Would you care to elaborate?<p>I can see how one might use a JSON schema that enforces source references in the output, but there is no technique I'm aware of that constrains a model to only produce data based on the grounding docs, as opposed to making up a response from pretrained data (or hallucinating one) and still listing the provided RAG results as attached references.<p>It feels like your "can't" would be tantamount to having single-handedly solved the problem of hallucinations, which, if you had, would be a billion-dollar-plus unlock for you, so I'm unsure you should show that level of certainty.
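To make the distinction concrete, here's roughly what schema-based citation enforcement can and can't check: a validator can reject answers with no citations or with citations outside the retrieved set, but it cannot prove the prose was actually derived from those sources. (Document ids below are hypothetical except the one quoted elsewhere in this thread.)

```python
def validate_answer(answer, retrieved_ids):
    """Post-hoc grounding check on a structured model answer.

    Verifies only the *shape* of grounding: citations exist and point
    into the retrieved set. It cannot detect an answer that was
    hallucinated and then decorated with valid-looking citations.
    """
    cited = answer.get("sources", [])
    if not cited:
        return False  # refuse answers with no citations at all
    # Refuse answers citing anything outside the retrieved documents.
    return all(src in retrieved_ids for src in cited)

retrieved = {"HOUSE_OVERSIGHT_029622", "DOC_000123"}  # DOC_000123 is made up
grounded = {"text": "...", "sources": ["HOUSE_OVERSIGHT_029622"]}
fabricated = {"text": "...", "sources": ["MADE_UP_DOC"]}
```

So "must cite sources" is enforceable; "can't generate ungrounded answers" is not.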
That doesn’t sound right. What model treats this as a controversial question?<p>"who is eligible to vote in US presidential elections"
what are the most unhinged and uncensored models out there?
I understand uncensored in the context of LLMs, what is unhinged? Fine tuning specifically to increase likelihood of entering controversial topics without specific prompting?
What use-cases gave you disappointing results? Did you build some kind of RAG?
The question is not how to analyze that, it's how to prosecute those who are above the law.
I keep thinking that the lack of children’s faces in the blacked-out rectangles makes the files much less shocking. I wonder if AI could put back fake images to make it clearer to people how sick all this is.
I understand the sentiment, but I'm always very concerned when it comes to AI generating pictures of children.
You're barely scratching the surface.<p>> Mr. Gates, in turn, praised Mr. Epstein’s charm and intelligence. Emailing colleagues the next day, he said: “A very attractive Swedish woman and her daughter dropped by and I ended up staying there quite late.”<p>What if I told you that the child sitting on Epstein's lap, the teenager he French-kissed, the girl whose skin he covered with fragments from Nabokov's Lolita, the one who had an entire corridor filled with her pictures in one of his properties, who appeared in every framed photograph on his desk and whose name is on the CD-ROMs, the only woman Epstein said he would ever marry – what if that girl is the daughter Bill Gates mentions? And that she and her mother were Epstein's main romantic interests and most persuasive tools?
I believe this would decrease credibility of the evidence, not increase it.
Please create a way to share conversations. I think that could be really relevant here.<p>I am not a huge fan of AI, but I allow this use case. This is really good in my opinion.<p>Beyond the ability to share convos, I hope you can also make those convos archivable on web.archive.org / the Wayback Machine.<p>So I am thinking that instead of having some random UUID, a shared conversation could have a URL like <a href="https://duckduckgo.com/?q=hello+test" rel="nofollow">https://duckduckgo.com/?q=hello+test</a> (the query in the URL parameters).<p>Maybe it's just me, but the archive can show all the links it has captured for a particular domain, so if many people ask queries and archive them, you almost get a database of good queries and answers. Archive features are severely underrated in many cases.<p>Good luck with your project!
> I'm experiencing technical difficulties accessing the archive at the moment. The search tools are returning internal server errors.<p>looks like it’s getting hugged
This is just feeding the files into a rag db I assume? I hope? And then you can use any decent model in front of it
It would be nice to have a way to query the exposed redactions to audit which of them were in violation of the Act.
Those are going to be some spicy hallucinations.
When first reading OSS, I thought this was going to be an Office of Strategic Services AI [0] agent :)<p>[0] <a href="https://en.wikipedia.org/wiki/Office_of_Strategic_Services" rel="nofollow">https://en.wikipedia.org/wiki/Office_of_Strategic_Services</a>
...whose most famous agent, OSS 117, predates James Bond by four years btw:<p><a href="https://en.wikipedia.org/wiki/OSS_117" rel="nofollow">https://en.wikipedia.org/wiki/OSS_117</a>
Feedback: this agent didn't really work well when I tried it with a specific non-famous but definitely publicly known individual with known connections to Epstein. I'd rather not post the specific name here. I found more documents with keyword searches. I guess it did get me to the conclusion that there wasn't much out there, but it didn't even mention material that showed up in name keyword searches.<p>To replicate, you might look at the list of ~30 individuals mentioned in the brief email from Epstein to Bannon a couple of weeks before Epstein died, and check how your engine handles each one. See how a keyword search on the Library of Congress does vs your agent.
Thanks for testing this. The Bannon email from June 30, 2019 is in there (HOUSE_OVERSIGHT_029622). Good stress-test idea.<p>A couple of things are happening:<p>Semantic search limitation: less-famous names don't have strong embeddings, so the search defaults to general connections rather than specific mentions
Keyword search gap: You're right — raw grep can catch exact names I'm missing
I saw a similar problem. Roger Schank had some conversations with Epstein, and the emails can be seen on Epsteinvisualizer.com, but your site claimed there were no emails or connection. To be fair to Roger, who was an AI legend of his time and someone I knew personally before his untimely death, he really was not a pedo and most likely never got involved with the girls; I think he and Epstein mostly just talked about AI and education.
Why the heck does this start with some sort of video bullshit?
And what did you learn?
In 2024, Trump used Epstein's former private jet for campaign appearances
Trump famously told New York Magazine in 2002: "I've known Jeff for 15 years. Terrific guy. He's a lot of fun to be with. It is even said that he likes beautiful women as much as I do, and many of them are on the younger side."<p>Trump and Epstein were social acquaintances in Palm Beach and New York circles during the 1990s-early 2000s. They socialized together at Mar-a-Lago and other venues
Is it able to handle a much larger dataset? Only a tiny fraction of the data has been released, from the looks of it.
Does this work with vector embeddings?
Reminder that only 1-2% of the files have been released.
Super Cool!
This is a good idea. One thing I never understand about these kinds of projects though: why are the standard questions provided to the user as prompts never cached?
oh, forgot about it, thanks. just a funny project i built in a couple hours so i didn't really sweat it haha
Outputs are usually generated with random sampling, so the same prompt may get different outputs.
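Caching still works despite the sampling randomness, though: store the first response per normalized prompt, and the nondeterminism only matters on the initial cache-miss call. A rough sketch, with a fake model standing in for the real API call (all names here are hypothetical):

```python
import hashlib

# In-memory cache for the suggested "standard question" prompts,
# keyed on a hash of the normalized prompt text.
_cache = {}

def cached_answer(prompt, generate):
    key = hashlib.sha256(prompt.strip().lower().encode()).hexdigest()
    if key not in _cache:
        _cache[key] = generate(prompt)  # hit the model exactly once
    return _cache[key]

# Stand-in for a real model call, counting invocations.
calls = {"n": 0}

def fake_model(prompt):
    calls["n"] += 1
    return "answer #%d" % calls["n"]
```

Every later user asking the same canned question then sees the same stored response, regardless of sampling.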
Not sure if this is possible, but it should be known that there is a COMPLETE INDEX to the original Epstein files<p>(not including the new millions upon millions of documents and photos)<p><a href="https://storage.courtlistener.com/recap/gov.uscourts.nysd.474895/gov.uscourts.nysd.474895.40.1.pdf" rel="nofollow">https://storage.courtlistener.com/recap/gov.uscourts.nysd.47...</a><p>They had to provide it in response to a 2017 FOIA request:<p><a href="https://www.bloomberg.com/news/newsletters/2025-08-08/here-s-a-look-at-what-the-fbi-s-epstein-files-would-reveal" rel="nofollow">https://www.bloomberg.com/news/newsletters/2025-08-08/here-s...</a><p>Might it be possible for machine learning to determine what is missing?<p>(which is basically 99%, as we already know less than 1% has been released)
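For a first pass you may not even need ML: if the FOIA index can be parsed into a list of item ids, a plain set difference against the released documents shows what's still missing and how small the coverage is. A sketch under that assumption (the ids below are placeholders, not real index entries):

```python
# Hypothetical parsed ids from the FOIA index PDF and from the release.
index_entries = {"item-001", "item-002", "item-003", "item-004"}
released = {"item-002"}

# Items listed in the index but absent from the released set.
missing = sorted(index_entries - released)

# Fraction of indexed items that were actually released.
coverage = len(released & index_entries) / len(index_entries)
```

The hard part is the parsing of the index PDF into stable ids, not the comparison itself.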
<i>can search the entire Epstein files</i><p>It's worth noting that only about 1% of the files have been released, according to the DOJ.<p>Of the released files, many have redactions.
Yep, they failed to meet the deadlines required by law, and it's not just any redactions either, but <i>unlawful</i> redactions.
If the Lake Michigan thing is just in the first 1%, then whatever's in the other 99% is going to be absolutely disgusting.
I searched it with the tool but nothing came up about Lake Michigan. What happened?
I would expect a large portion of the remaining records to be internal emails and memos about the process of building a case around evidence, rather than the root evidence itself.<p>Not that that would excuse the administration's unlawful behavior so far, or indicate that the unreleased 99% can't contain some big bombshells.
sorry all publicly available files *
Ah, yes. Post is an LLM-something project: top comment is a general critique of LLMs. Waiting for this to get old. Meanwhile, at least you get points for being funny.
<p><pre><code> > + '' * n
</code></pre>
This looks like what you'd get from using text-davinci-003 as the model in your AI-assisted IDE
I think it looks like what you get by writing code and making a typo.
no - the utf8 black box was removed by hackernews. thanks for noticing.<p>Can't edit it anymore, but it would be "\u25A0" * n
All these attempts look like an emulation of "the pen (software) is mightier than the sword", or the belief that if only more people believed in the cause, we would be closer to a resolution.<p>Remember folks, soft power is nothing in front of hard power.