29 comments

  • barishnamazov 12 hours ago
    I like that this relies on generating SQL rather than just being a black-box chat bot. It feels like the right way to use LLMs for research: as a translator from natural language to a rigid query language, rather than as the database itself. Very cool project!

    Hopefully your API doesn't get exploited and you are doing timeouts/sandboxing (e.g. something like the sketch below) -- it'd be easy to do a massive join on this.

    I also have a question mostly stemming from me being not knowledgeable in the area -- have you noticed any semantic bleeding when research is done between your datasets? E.g., "optimization" probably means different things under arXiv, LessWrong, and HN. Wondering if vector searches account for this given a more specific question.
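    On the timeout point, a minimal server-side guard, assuming Postgres and psycopg2 (the DSN and query are placeholders, not this project's actual setup):

      import psycopg2

      conn = psycopg2.connect("dbname=corpus user=readonly")  # illustrative DSN
      with conn.cursor() as cur:
          # Abort any statement that runs longer than 5 seconds, so a runaway
          # join can't hold a connection hostage.
          cur.execute("SET statement_timeout = '5s'")
          cur.execute("SELECT count(*) FROM posts")  # stands in for an LLM-written query
          print(cur.fetchone())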
    • bredren 1 hour ago
      This is the route I went for making Claude Code and Codex conversation histories local and queryable by the CLIs themselves.

      Create the DB and provide the tools and skill.

      This blog entry explains how: https://contextify.sh/blog/total-recall-rag-search-claude-code-codex.html

      It is a macOS client at present, but I have a Linux-ready engine I could use early feedback on if anyone is interested in giving it a go.
    • llmslave2 32 minutes ago
      > I like that this relies on generating SQL rather than just being a black-box chat bot.

      When people say AI is a bubble but will still be transformational, I think of stuff like this. The number of use cases for natural language interpretation and translation is *enormous* even without all the BS vibe-coding nonsense. I reckon once the bubble pops, most investment will go into tools that operate something like this.
    • keeeba 11 hours ago
      I don't have the experiments to prove this, but from my experience it's highly variable between embedding models.

      Larger, more capable embedding models are better able to separate the different uses of a given word in the embedding space; smaller models are not.
      • Xyra 3 hours ago
        I'm using Voyage-3.5-lite at halfvec(2048), which, from my limited research, seems to be one of the best embedding models. There's semi-sophisticated ~300-token chunking (breaking on paragraphs, then sentences).

        When Claude is using our embed endpoint to embed arbitrary text as a search vector, it should work pretty well across domains. One can also use compositions of centroids (averages) of vectors in our database as search vectors, as sketched below.
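        A minimal sketch of that centroid composition, assuming numpy and L2-normalized embeddings (the helper names are illustrative, not our actual API):

          import numpy as np

          def centroid(vectors):
              # Average a set of embedding vectors, then re-normalize so the
              # result is usable as a cosine-similarity search vector.
              c = np.mean(np.stack(vectors), axis=0)
              return c / np.linalg.norm(c)

          # e.g. build a "topic" search vector from a few known-relevant docs:
          # doc_vecs = [embed(d) for d in sample_docs]  # embed() is hypothetical
          # query_vec = centroid(doc_vecs)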
      • A4ET8a8uTh0_v2 11 hours ago
        I have been thinking about this a fair bit lately. We have all sorts of benchmarks that describe a lot of factors in detail, but they are all very abstract and still do not seem to map clearly to well-observed behaviors. I think we need a different way to characterize these models.
  • nathan_f77 2 hours ago
    This sounds awesome! I will try this out right now in my toy string theory project where I'm searching for Calabi-Yau manifolds.

    Comment from Claude: Claude here (the AI). Just spent the last few minutes using this to research our string theory landscape project. Here's what I found:

      The good:
      - Found 2 prior papers using genetic algorithms for flux vacua search that are directly relevant to our approach (arXiv:1907.10072 and 1302.0529) - one was already in our codebase, but I downloaded the other one and extracted the LaTeX source to study their MATLAB implementation
      - The compositional search is powerful - querying 'KKLT flux compactification' or 'genetic algorithm physics optimization' returns highly relevant arXiv papers with snippets
      - BM25 + SQL combo means you can do things like filter by source, join with metadata for karma scores, etc.

      Practical notes:
      - Escaping quotes in bash + JSON is annoying - I ended up writing queries to temp files (see the sketch below)
      - The 100-result cap on alignment.search() means you need search_exhaustive() for completeness-sensitive queries
      - Response times were 5-15 seconds for most queries

      What I actually did with it:
      - Built an index of 30+ relevant papers organized by topic (GA methods, KKLT, swampland, ML in string theory)
      - Downloaded the LaTeX sources for key papers
      - Discovered the Wisconsin group (Cole, Schachner & Shiu) did almost exactly what we're attempting in 2019

      Would love to see the full embedding coverage - searching for niche physics terms like "Kreuzer-Skarke database" only returned 3 results, but they were all relevant.
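    For anyone hitting the same bash/JSON quoting pain, building the payload in Python sidesteps it entirely. A sketch; the endpoint URL and field names are made up for illustration, not the real API:

      import json, urllib.request

      query = """
      SELECT title, url FROM posts
      WHERE source = 'arxiv' AND content LIKE '%flux vacua%'
      LIMIT 20
      """
      payload = json.dumps({"sql": query}).encode()  # json.dumps handles all escaping

      req = urllib.request.Request(
          "https://api.example.com/query",  # hypothetical endpoint
          data=payload,
          headers={"Content-Type": "application/json",
                   "Authorization": "Bearer YOUR_API_KEY"},  # placeholder key
      )
      with urllib.request.urlopen(req) as resp:
          print(json.load(resp))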
    • nathan_f77 1 hour ago
      I also used this to research the recent DESI finding that dark energy might be changing over time [1], and what that means for string theory.

      From Claude:

        Summary of Research Findings

        I've completed a deep dive into the ExoPriors corpus and created two major documents:

        1. /research/exopriors_findings.md - GA and String Landscape Papers

        Key discoveries:
        - arXiv:1907.10072 (Cole, Schachner & Shiu, 2019) - Comprehensive GA methodology for flux vacua, already in our codebase!
        - arXiv:1302.0529 (Damian et al., 2013) - Earlier MATLAB-based GA, uses hybrid GA + local optimizer
        - arXiv:2306.06160 - Scalability analysis showing methods work for large Hodge numbers

        2. /research/DESI_DARK_ENERGY_IMPLICATIONS.md - DESI Dark Energy Impact

        Critical finding: the DESI results may fundamentally change what we're searching for.

        The DESI result in brief:
        - Dark energy equation of state w₀ ≈ -0.55 (not -1)
        - Evidence for evolution: wₐ ≈ -1.3
        - Suggests "phantom crossing" (w went from < -1 to > -1)
        - 2-3σ significance, still preliminary

        Key papers found:
        1. arXiv:2112.10779 (Cicoli et al.) - Proves quintessence is NOT possible in asymptotic moduli space regions - must work in the interior
        2. arXiv:2511.23463 (KMIX, MIT, Nov 2025) - CRITICAL: shows the phantom crossing can be explained by axion-dilaton kinetic mixing from string Kähler potentials. The "phantom" is an illusion!

        What this means for our project:
        - Instead of targeting static Λ = 2.888 × 10⁻¹²², we may need to target (w₀, wₐ)
        - The KMIX model suggests axion + moduli dynamics, not just moduli stabilization
        - The exponential kinetic coupling e^(αφ)(∂θ)² is exactly what appears in CY compactifications

        Next steps:
        1. Study the KMIX model in detail - it uses string-motivated physics
        2. Consider adding (w₀, wₐ) to our fitness function
        3. Possibly include axion dynamics alongside Kähler moduli
        4. Update FORMULAS.md with quintessence equations

      [1] https://www.bbc.com/news/articles/c17xe5kl78vo
  • dcreater 1 hour ago
    "intelligence explosion", "are essentially AGI at this point", "ARBITRARY SQL + VECTOR ALGEBRA", etc. Casual use of hyperbole and technical jargon.

    My charlatan radar is going off.
    • Xyra 28 minutes ago
      What is hyperbole? We are collectively experiencing a software intelligence explosion (people are shipping good software at prolific rates now due to Opus 4.5 and GPT-5.2-Codex-xhigh). With Scry, you can run arbitrary SELECT SQL statements over a large corpus, and you have an easier time composing embedding vectors in whatever mathematical ways you want than with any other tool I've seen.
  • theptip 1 hour ago
    Guys, you obviously cannot suggest that --dangerously-skip-permissions is OK here, especially in the same paragraph as "even if you are not a software engineer". This is untrusted text from the Internet; it surely contains examples of prompt injection.

    You need to sandbox Claude to safely use this flag. There are easy-to-use options for this.
    • skybrian 1 hour ago
      Today I finally got Claude working in a devcontainer, so I'm wondering what the easier options are.
      • theptip 2 minutes ago
        Things like https://github.com/textcortex/claude-code-sandbox seem like the bare minimum. There are a few other projects doing this.

        The first threat is making edits to arbitrary files, or exfiltrating your SSH keys or crypto wallets. A container solves that by not mounting your sensitive files.

        The second threat would be if Claude gets fully owned and really tries to hack out of its container, in which case Docker might theoretically not protect you. But that seems quite speculative.
      • dcreater 1 hour ago
        Yeah, I don't think there are easier options. And getting it working within a devcontainer with all the right settings was more of a chore than it should be.
      • jaggederest 1 hour ago
        Don't rely completely on a devcontainer: jailbreaking containers is something Claude at least nominally knows *how* to do, though it seems to be pretty strongly moralized not to without some significant prompt hacking.
  • bonsai_spool 9 hours ago
    This may exist already, but I'd like to find a way to query 'Supplementary Material' in biomedical research papers for genes / proteins or even biological processes.

    As it is, the Supplementary Materials are inconsistently indexed, so a lot of the insight you might get from the last 15 years of genomics or proteomics work is invisible.

    I imagine this approach could work, especially for Open Access data?
    • eamag 8 hours ago
      I just built something like this a week ago: https://github.com/eamag/papers2dataset

      I wanted to find all cryoprotective agents that were tested at different temperatures, but it should be extendable to your problem too. It uses OpenAlex to traverse a citation graph and open-access PDFs.
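      The citation-graph walk is roughly this shape (the cites: filter is OpenAlex's real API; the seed ID and one-hop depth are just for illustration):

        import requests

        def works_citing(openalex_id, per_page=25):
            # OpenAlex: list works whose references include the given work.
            r = requests.get(
                "https://api.openalex.org/works",
                params={"filter": f"cites:{openalex_id}", "per-page": per_page},
            )
            r.raise_for_status()
            return r.json()["results"]

        # One hop out from a seed paper (the ID here is illustrative).
        for w in works_citing("W2741809807"):
            oa = w.get("open_access", {})
            print(w["id"], w["display_name"], oa.get("oa_url"))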
      • jcmoscon 5 hours ago
        This is a pretty cool project! Thank you for open-sourcing it!
  • nielsole 11 hours ago
    I think a prompt + an external dataset is a very simple distribution channel right now to explore anything quickly with low friction. The curl | bash of 2026.
    • skapadia 7 hours ago
      Exactly. Prompt + Tool + External Dataset (API, file, database, web page, image) is an extremely powerful capability.
  • kburman 11 hours ago
    > a state-of-the-art research tool over Hacker News, arXiv, LessWrong, and dozens

    What makes this state of the art?
    • rvnx 8 hours ago
      It's just marketing.

      It is not a protected term, so anything is state-of-the-art if you want it to be.

      For example, Gemma models at the moment of release were performing worse than their competition, but still, they were "state-of-the-art". That does not mean it's a bad product at all (Gemma is actually good), but the claims are very free.

      Juicero was state-of-the-art on release too, though hands were better, etc.
      • lo_zamoyski 7 hours ago
        > It's just marketing. [...] It is not a protected term, so anything is state-of-the-art if you want it to be.

        But is it *true*?

        I think we ought to stop indulging and rationalizing self-serving bullshit with the "it's just marketing" bit, as if that somehow makes bullshit okay. It's not okay. Normalizing bullshit is culturally destructive and reinforces the existing indifference to truth.

        Part of the motivation seems to be a cowardly, morbid fear of conflict, or of acknowledging that the world is a mess. But I'm not even suggesting conflict. I'm suggesting demoting the dignity of bullshitters in one's own estimation of them. A bullshitter should appear trashy to us, because bullshitting is trashy.
        • docjay 5 hours ago
          I would vote for you as dictator.
          • econ 56 minutes ago
            If my comments were only state of the art I wouldn't need to write them.
      • goopypoop 7 hours ago
        Just like "cruelty free" and "not tested on animals" in the USA.
    • 7moritz7 11 hours ago
      The scale. How many tools do you know that can query the *content* of all arXiv papers?
      • eamag 4 hours ago
        Doesn't look like the scale is there, even for HN:

        > Currently have embedded: posts: 1.4M / 4.6M, comments: 15.6M / 38M

        That's with Voyage-3.5-lite.
      • nandomrumber 11 hours ago
        The tool is state of the art; the sources are historical.
      • ashirviskas 11 hours ago
      First, so best in this?
  • biophysboy 2 hours ago
    Just a recommendation: PubMed is free and not limited to preprints.
    • Xyra 26 minutes ago
      Thank you. I've started ingestion of PubMed.
  • dcreater 1 hour ago
    Not a software engineer here. Isn't allowing network egress a security risk? exopriors.com is not an established domain or brand that warrants the trust it's asking for.
  • 7777777phil 12 hours ago
    Really useful. I'm currently working on an autonomous academic research system [1] and thinking about integrating this. Currently using a custom prompt + the Edison Scientific API. Any plans to make this open source?

    [1] https://github.com/giatenica/gia-agentic-short
    • Xyra 2 hours ago
      I could make it open source as soon as I have $5k to my name. Frankly, I've been in survival mode for a long time.
  • arjie 3 hours ago
    This is very cool. If you're productizing this, you should try to target a vertical. What does "literally don't have the money" mean? You should try to raise some in the traditional way. If nothing else works, at least try to apply to YC.
    • Xyra 3 hours ago
      I mean I've been living off $1700/month for a while in Berkeley. I have been trying hard for the last 6 weeks to raise angel investment, and am moving to Thailand in a few days to get more breathing room (and to change things up, untie some emotional knots, and make sure I'm positioned to vibe-engineer as well as possible over the next few months).
      • arjie 2 hours ago
        You don't have any personal contact information on your website or on your Hacker News profile. For a tiny check size, I can be an angel. Contact in profile. Would you like to meet before you leave? I think you shouldn't move out of the Bay Area.
        • Xyra 2 hours ago
          That sounds great, thanks, I emailed you.
      • davidzweig 1 hour ago
        I've got some idle servers with lots of GPUs in my basement in Bulgaria. I'm in Cambodia at the moment, and I've been playing with some similar ideas. Message me if you like. :)
  • r--w 3 hours ago
    It could be distributed as a Claude skill. Internally, we've bundled a lot of external APIs and SQL queries into skills that are shared across the company.
  • lastdong 6 hours ago
    Has anyone tried these prompts with Gemini 3 Pro? It feels like the latest Claude, Gemini, and GPT offerings are on par (excluding costs), and as a developer, if you know how to query/spec a coding LLM, you can move between them with ease.
    • awestroke 3 hours ago
      Claude Opus 4.5 is a paradigm shift.
  • anonfunction 5 hours ago
    Seems like you're experiencing the Hacker News hug of death.
    • Xyra 5 hours ago
      Should be squared away now! It was my fault: a missing health check for a recent weird bug, not a load issue.
      • anonfunction 5 hours ago
        The console / login pages are still showing an error.
  • legohorizons 4 hours ago
    Do you have contact information? I would like to discuss sponsoring further work and embedding here.
    • Xyra 2 hours ago
      That would be amazing! Yes, contact@exopriors.com.
  • nineteen999 12 hours ago
    That's just not a good use of my Claude plan. If you can make it so a self-hosted Llama or Qwen 7B can query it, then that's something.
    • panarky 2 hours ago
      If you're not willing to pay for your own LLM usage to try a free resource offered by the author, that's up to you. But why complain to the author about it? How does your comment enrich the conversation for the rest of us?
    • Xyra 5 hours ago
      It's ultimately just a prompt; self-hosted models can use the system the same way, they just might struggle to write good SQL+vector queries to answer your questions. The prompt also works well with Codex, which has a lot of usage.
    • mcintyre1994 11 hours ago
      I think that’s just a matter of their capabilities, rather than anything specific to this?
  • m11a 10 hours ago
    The quick setup is cool! I’ve not seen this onboarding flow for other tools, and I quite like its simplicity.
  • fragmede 10 hours ago
    > I can embed everything and all the other sources for cheap, I just literally don't have the money.

    How much do you need for the various leaks, like the Paradise Papers, the Panama Papers, the Offshore Leaks, the Bahamas Leaks, the FinCEN Files, the Uber Files, etc., and what's your Venmo?
  • mentalgear 12 hours ago
    Nice, but would you consider open-sourcing it? I (and I assume others) am not keen on sharing my API keys with a 3rd party.
    • nielsole 11 hours ago
      I think you misunderstood. The API key is for their API, not Anthropic's.

      If you take a look at the prompt, you'll find that they have a static API key that they created for this demo ("exopriors_public_readonly_v1_2025").
      • Xyra 2 hours ago
        Yes, thanks for explaining it.
  • voxleone 8 hours ago
    This is great:

    > @FTX_crisis - (@guilt_tone - @guilt_topic)

    Using LLMs for tasks that could be done faster with traditional algorithmic approaches seems wasteful, but this is one of the few legitimate cases where embeddings are doing something classical IR literally cannot. You could also make the LLM explain the query it's about to run. Before execution:

    "Here's the SQL and semantic filters I'm about to apply. Does this match your intent?"
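    A minimal sketch of that confirm-before-execute gate, in plain Python (the plan structure, run_query callable, and the SQL shown are all hypothetical):

      def confirm_and_run(plan, run_query):
          # `plan` is whatever the LLM proposes: SQL text plus its vector filters.
          print("About to run:\n" + plan["sql"])
          print("Semantic filters:", ", ".join(plan["vector_filters"]))
          if input("Does this match your intent? [y/N] ").strip().lower() == "y":
              return run_query(plan["sql"])
          print("Cancelled - rephrase and try again.")

      # Example plan the model might emit before a long-running query:
      plan = {
          "sql": "SELECT id, title FROM posts ORDER BY embedding <=> :qvec LIMIT 50",
          "vector_filters": ["@FTX_crisis - (@guilt_tone - @guilt_topic)"],
      }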
    • Xyra 5 hours ago
      Great idea! I just overhauled the prompt to explain the SQL + semantic filters better, and to give the user clearer opportunities to adjust before long-running queries.
  • darlontrofy 5 hours ago
    It's a very nifty tool, and could definitely come in handy. Love the UX too!
  • gtsnexp 12 hours ago
    Is the appeal of this tool its ability to identify semantic similarity?
    • A4ET8a8uTh0_v2 10 hours ago
      The use case could vary from person to person. When you think about it, Hacker News has a large enough dataset (and one that is widely accessible) to allow all sorts of fun analyses. In a sense, the appeal is: who knows what kind of fun patterns could emerge?
      • noduerme 8 hours ago
        The problem with HN isn't that the patterns are hard to discern; it's that no one wants to acknowledge them.
        • A4ET8a8uTh0_v2 6 hours ago
          Oh? With few exceptions, I have found people here more willing to be won over by an argument than anywhere else. Anything in particular you can share?
  • bugglebeetle 13 hours ago
    Seems very cool, but IMO you'd be better off doing an open-source version and then a hosted SaaS.
    • Xyra 5 hours ago
      Would you mind walking me through the logic of that a bit? I'm definitely interested in productizing this, and would be interested in open-sourcing as soon as I have breathing room (I have no money).
  • lasgawe 3 hours ago
    I need to try this.
  • beepbooptheory 7 hours ago
    Does that first generated query really work? Why are you looking at URIs like that? First you filter for a URI match, then later filter out that same match, minus `optimization`, when you are doing the cosine distance. Not once is `mesa-optimization` even mentioned, which is supposed to be the whole point?
    • Xyra 1 hour ago
      I've since improved it, and also discovered a new method of vector composition that I've added as a first-class primitive:

      debias_vector(axis, topic) removes the projection of axis onto topic: axis − topic * (dot(axis, topic) / dot(topic, topic))

      That preserves the signal in axis while subtracting only the overlap with topic (not the whole topic). It's strictly better than naive subtraction for "about X but not Y."
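      In numpy terms, straight from the formula above (the final renormalization is my own assumption for cosine search, not part of the definition):

        import numpy as np

        def debias_vector(axis, topic):
            # axis - topic * (dot(axis, topic) / dot(topic, topic)):
            # strip only the component of `axis` that lies along `topic`.
            out = axis - topic * (np.dot(axis, topic) / np.dot(topic, topic))
            return out / np.linalg.norm(out)  # renormalize for cosine search (assumption)

        # Naive subtraction (axis - topic) removes all of `topic`, including
        # signal it shares with `axis`; the projection removes only the overlap.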
  • octoberfranklin 12 hours ago
    > "Claude Code and Codex are essentially AGI at this point"

    Okaaaaaaay....
    • Closi 11 hours ago
      It just comes down to your own view of what AGI is, as it's not particularly well defined.

      While a bit 'time-machiney': I think if you took an LLM of today and showed it to someone 20 years ago, most people would probably say AGI has been achieved. If someone wrote a definition of AGI 20 years ago, we would probably have met that.

      We have certainly blasted past some science-fiction examples of AI, like Agnes from The Twilight Zone, which 20 years ago looked a bit silly and now looks like a remarkable prediction of LLMs.

      By today's definition of AGI we haven't met it yet, but eventually it comes down to 'I know it if I see it' - the problem with this definition is that it is polluted by what people have already seen.
      • nottorp 9 hours ago
        > most people would probably say AGI has been achieved

        Most people who took a look at a carefully crafted demo, i.e. the CEOs who keep pouring money down this hole.

        If you actually use it, you'll realize it's a tool, and not a particularly dependable tool unless you want to code what amounts to the React tutorial.
        • lcnmrn 2 hours ago
          I built a Nostr web client with Gemini CLI, without looking at code or touching the IDE: https://github.com/lucianmarin/subnostr
          • nottorp 1 hour ago
            So it had a tutorial for that API and reimplemented it.
        • bebb 8 hours ago
          Depending on the task, the tool can, in effect, demonstrate more intelligence than most people.

          We've just become accustomed to it now, and tend to focus more on the flaws than the progress.
      • andy99 6 hours ago
        > I think if you took an LLM of today and showed it to someone 20 years ago, most people would probably say AGI has been achieved.

        I've got to disagree with this. All past pop-culture AI was sentient and self-motivated; it was human-like in that it had its own goals and autonomy.

        Current AI is a transcript generator. It can do smart stuff but it has no goals; it just responds with text when you prompt it. It feels like magic, even compared to 4-5 years ago, but it doesn't feel like what was classically understood as AI, certainly by the public.

        Somewhere along the way, marketers changed AGI to mean "does predefined tasks with human-level accuracy" or the like. This is more like the definition of a good function approximator (how appropriate) than what people think (or thought) about when considering intelligence.
        • docjay 2 hours ago
          The thing that blows my mind about language models isn't that they do what they do; it's that it's indistinguishable from what we do. We are a black box: nobody knows how we do what we do, or whether we even do what we do because of a decision we made. But the funny thing is, if I can perfectly replicate a black box, then you cannot say that what I'm doing isn't exactly what the black box is doing as well.

          We can't measure goals, autonomy, or consciousness. We don't even have an objective measure of intelligence. Instead, since you probably look like me, I think it's polite to assume you're conscious... that's about it. There's literally no other measure. I mean, if I wanted to be a jerk, I could ask if you're conscious, but whether you say yes or no is proof enough that you are. If I'm curious about intelligence, I can come up with a few dozen questions, out of a possible infinite number, and if you get those right, I'll call you intelligent too. But if you get them wrong... well, I'll just give you a different set of questions; maybe accounting is more your thing than physics.

          So, do *you* just respond with text when you're prompted with input from your eyes or ears? You'll instinctively say "No, I'm conscious and make my own decisions", but that's just a sequence of tokens with a high probability in response to that question.

          Do *you* actually have goals, or did the system prompt of life tell you that in your culture, at this point in time, you should strive to achieve goals, because that's what gets positive feedback?
          • andy99 22 minutes ago
            Your argument makes no sense.
        • nextaccountic 5 hours ago
          > Current AI is a transcript generator. It can do smart stuff but it has no goals

          That's probably not because of an inherent lack of capability, but because the companies that run AI products don't want to run autonomous intelligent systems like that.
    • bananaflag 11 hours ago
      > If someone wrote a definition of AGI 20 years ago, we would probably have met that.

      No: as long as people can do work that a robot cannot do, we don't have AGI. That was always, if not the definition, at least implied by the definition.

      I don't know why the meme of AGI being not well defined has had such success over the past few years.
      • bonplan23 9 hours ago
        "Someone" literally did that (± 2 years): https://link.springer.com/book/10.1007/978-3-540-68677-4

        I think it was supposed to be a more useful term than the earlier and more common "strong AI". With regard to strong AI, there was a widely accepted definition, i.e. passing the Turing test, and we are way past that point already (see https://arxiv.org/pdf/2503.23674).
        • erfgh 5 hours ago
          I have to challenge the paper authors' understanding of the Turing test. For an AI system to pass the Turing test, its output needs to be indistinguishable from a human's. In other words, the rate of picking the AI system as human should equal the rate of picking the human. If in an experiment the AI system is picked at a rate higher than 50%, it does not pass the Turing test (as the authors seem to believe), because another human can use this knowledge to conclude that the system being picked is not really human.

          Also, I would go one step further and claim that to pass the Turing test, an AI system should be indistinguishable from a human when judged by people *trained* in making such a distinction. I doubt that they used such people in the experiment.

          I doubt that any AI system available today, or in the foreseeable future, can pass the test as I qualify it above.
          • CamperBob2 4 hours ago
            People are constantly being fooled by bots in forums like Reddit and this one. That's good enough for me to consider the Turing test passed.

            It also makes me consider it an inadequate test to begin with, since all classes of humans, including domain experts, can be fooled and have been in the past. The Turing test has always said more about the human participants than about the machine.
        • Closi 11 hours ago
          Completely disagree - your definition (in my opinion) is more aligned with the concept of Artificial Superintelligence.

          Surely the 'general intelligence' definition has to be consistent between 'artificial general intelligence' and 'human general intelligence', and humans can be generally intelligent even if they can't solve calculus equations or protein-folding problems. My bar for general intelligence is much lower than most people's - I think a dog is probably generally intelligent, although obviously in a different way (dogs are obviously better at learning how to run and catch a ball, and worse at programming Python).
          • fc417fc802 9 hours ago
            I do consider dogs to have "general intelligence", but despite that I have always (my entire life) considered AGI to imply human-level intelligence. Not better, not worse, just human level.

            It gets worse though. While one could claim that scoring equivalently on some benchmark indicates performance at the same level - and I'd likely agree - that's not what I take AGI to mean. Rather, I take it to mean "equivalent to a human", so if it utterly fails at something we're good at, such as driving a car through a construction zone during rush hour, then I don't consider it to have met the bar of AGI, even if it meets or exceeds us at other unrelated tasks. You have to be *at least as general as a stock human* to qualify as AGI in my book.

            Now, I may be but a single data point, but I think there are a lot of people out there who feel similarly. You can see this a lot in popular culture, with AGI (or often AI) being used to refer to autonomous humanoid robots portrayed as operating at or above a human level.

            Related to all that, since you mention protein folding: I consider that a form of superintelligence, as it is more or less inconceivable that an unaided human would ever accomplish such a feat. So I consider AlphaFold to be both superintelligent and decidedly *not* AGI. Make of that what you will.
            • docjay 2 hours ago
              Pop culture has spent its entire existence conflating AGI and 'physical AI', so much so that the collective realization that they're entirely different things is relatively recent. Both were so far off in the future that the distinction wasn't worth considering, until suddenly one of them is kinda maybe sorta roughly here now... ish.

              Artificial General *Intelligence* says nothing about physical ability, but movies with the 'intelligence' part typically match it with equally futuristic biomechanics to make the movie more interesting. AGI = Skynet; physical AI = the Terminator. The latter will likely be the hardest part, not only because it requires the former first, but because you can't just throw more watts at a stepper motor and get a ballet dancer.

              That said, I'm confident that if I could feed zero-noise, precise "human sensory"-level sensor data to any of the top LLMs, and their output were equally coupled to a human arm with the same sensory feedback, they would definitely outdo any current self-driving car implementation. The physical connection is the issue, and will be for a long time.
              • fc417fc802 1 hour ago
                Agreed about the conflation. But that drives home that there isn't some historic, commonly and widely accepted definition of AGI whose goalposts are being moved. What there was doesn't match the new developments, and was also often quite flawed to begin with.

                > LLM models, ... outdo any current self-driving car

                How would an LLM handle computer vision? Are you implicitly including a second embedding model there? Even so, I think that's still the wrong sort of vision data for precise control, at least in general.

                How do you propose to handle the model hallucinating? What about losing its train of thought?
            • Closi 2 hours ago
              I think your definition of 'human level' is sensible - definitely a lower bar to hit than 'as long as people can do work that a robot cannot do, we don't have AGI'.

              There is certainly a lot of road between current technology and driving a car through a construction zone during rush hour, particularly with the same amount of driving practice a human gets.

              Personally, I think there could be an AGI that couldn't drive a car but has genuine sentience - an awareness of being alive, although not necessarily the exact human experience. Maybe this isn't AGI, which implies problem-solving and thinking rather than sentience, but my gut says that if we got something sentient that couldn't drive a car, we would still be there, if that makes sense.
              • fc417fc802 1 hour ago
                In theory I see what you're saying. There are physical things an octopus could conceivably do that I never could, on account of our physiology rather than our intelligence. So you can contrive an analogous scenario involving only the mind, where something that is clearly an AGI is incapable of some specific task and thus falls short of my definition. This makes it clear that my definition is a heuristic rather than rigorous.

                Nonetheless, it's difficult to imagine a scenario where something that is genuinely human-level can't adapt in the field to a novel task such as driving a car. That sort of broad adaptability is exactly what the "general" in AGI is attempting to capture (imo).
      • sixtyj 9 hours ago
        Charles Stross published Accelerando in 2005.

        The book is a collection of nine short stories telling the tale of three generations of a family before, during, and after a technological singularity.
    • phatfish 11 hours ago
      I want to know what the "intelligence explosion" is; it sounds much cooler than AGI.
      • adammarples 11 hours ago
        When AI gets so good it can improve on itself.
        • peheje 9 hours ago
          Actually, this has already happened in a very literal way. Back in 2022, Google DeepMind used an AI called AlphaTensor to "play" a game where the goal was to find a faster way to multiply matrices - the fundamental math that powers all AI.

          To understand how big this is, you have to look at the numbers:

          The naive method: This is what most people learn in school. To multiply two 4x4 matrices, you need 64 multiplications.

          The human record (1969): For over 50 years, the "gold standard" was Strassen's algorithm, which used a clever trick to get it down to 49 multiplications (sketched below).

          The AI discovery (2022): AlphaTensor beat the human record by finding a way to do it in just 47 steps.

          The real "intelligence explosion" feedback loop happened even more recently with AlphaEvolve (2025). While the 2022 discovery only worked for specific "finite field" math (mostly used in cryptography), AlphaEvolve used Gemini to find a shortcut (48 steps) that works for the standard complex numbers AI actually uses for training.

          Because matrix multiplication accounts for the vast majority of the work an AI does, Google used these AI-discovered shortcuts to optimize the kernels in Gemini itself.

          It's a literal cycle: the AI found a way to rewrite its own fundamental math to be more efficient, which then makes the next generation of AI faster and cheaper to build.

          https://deepmind.google/blog/discovering-novel-algorithms-with-alphatensor/
          https://www.reddit.com/r/singularity/comments/1knem3r/i_dont_think_people_realize_just_how_insane_the/
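          For the curious, the 49 comes from Strassen's 2x2 trick: 7 multiplications instead of 8, and applying it at two levels of a 4x4 product gives 7 × 7 = 49. A sketch of one level, in plain Python:

            def strassen_2x2(A, B):
                # One level of Strassen: 7 multiplications instead of 8 for
                # 2x2 matrices (the entries could themselves be sub-blocks).
                (a11, a12), (a21, a22) = A
                (b11, b12), (b21, b22) = B
                m1 = (a11 + a22) * (b11 + b22)
                m2 = (a21 + a22) * b11
                m3 = a11 * (b12 - b22)
                m4 = a22 * (b21 - b11)
                m5 = (a11 + a12) * b22
                m6 = (a21 - a11) * (b11 + b12)
                m7 = (a12 - a22) * (b21 + b22)
                return ((m1 + m4 - m5 + m7, m3 + m5),
                        (m2 + m4, m1 - m2 + m3 + m6))

            # Sanity check against the naive 8-multiplication result:
            assert strassen_2x2(((1, 2), (3, 4)), ((5, 6), (7, 8))) == ((19, 22), (43, 50))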
          • adammarples 7 hours ago
            This is obviously cool, and I don't want to take away from that, but using a shortcut to make training a bit faster is qualitatively different from producing an AI which is actually more intelligent. A more intelligent AI can recursively produce a more intelligent one, and so on - hence the explosion. If it's a bit faster to train but gives the same result, there is no explosion. It may be that finding efficiencies in our equations is low-hanging fruit, but developing fundamentally better equations will prove impossible.
          • lo_zamoyski 7 hours ago
            s/improve itself/explode itself/
      • Hamuko 12 hours ago
        I have noticed that Claude users seem to be about as intelligent as Claude itself, and wouldn't be able to surpass its output.
        • noduerme 8 hours ago
          This made me laugh. Unfortunately, this is the world we live in. Most people who drive cars have no idea how they work, or how to fix them. And people who get on airplanes aren't able to flap their arms and fly.

          Which means that humans are reduced to a sort of uselessness / helplessness, using tools they don't understand.

          Overall, no one tells Uncle Bob that he doesn't deserve to fly home to Minnesota for Christmas because he didn't build the aircraft himself.

          But we all think it.
        • baq 8 hours ago
        You seem to be very confused about what intelligence even is.
        • fragmede 9 hours ago
        You, of course, are smarter than them.
  • bfeynman 3 hours ago
    Lots of highfalutin language trying to make something that's pretty hand-wavy look like it's not. Where are the benchmarks? The "vector algebra" framing with @X + @Y - @Z is a falsehood: embedding spaces don't form any meaningful algebraic structure (ring, field, etc.) over semantic concepts; you're just getting lucky via residual effects.
    • Xyra 2 hours ago
      I'm giving you, the user, the easiest way you've most likely ever had to explore embedding space yourself. Embeddings are tricky and can mislead, but they do often compose surprisingly intuitively, especially once you've played around and built up a bit of an intuition for it.
      • edmundsauto 2 hours ago
        What is the impact of misleading embeddings, and how do they compose? I am honestly interested but don't know enough to understand what you're saying.

        Why would I want to explore the embedding space myself? Isn't this a tool where I can run cross-data exploratory analyses against unstructured data, where it's pre-populated with content?