Prism

(openai.com)

741 points by meetpateltech 1 day ago

111 comments

  • Perseids 12 hours ago
    I'm dumbfounded they chose the name of the infamous NSA mass surveillance program revealed by Snowden in 2013. And even more so that there is just one other comment among 320 pointing this out [1]. Has the technical and scientific community in the US already forgotten this huge breach of trust? This is especially jarring at a time when the US is burning its political goodwill at an unprecedented rate (at least unprecedented during the lifetimes of most of us) and talking about digital sovereignty has become mainstream in Europe. As a company trying to promote a product, I would stay as far away from that memory as possible, at least if you care about international markets.

    [1] https://news.ycombinator.com/item?id=46787165
    • teddyh 5 minutes ago
      We used to have "SEO spam", where people would try to create news (and other) articles associated with some word or concept to drown out some scandal associated with that same word or concept. The idea was that people searching Google for the word would see only the newly created articles, and not see anything scandalous. This could be something similar, but aimed at future LLMs trained on these articles. If LLMs learn that the word "Prism" means a certain new thing in a surveillance context, they will *unlearn* the older association, thereby hiding the Snowden revelations.
    • ZpJuUuNaQ5 10 hours ago
      > I'm dumbfounded they chose the name of the infamous NSA mass surveillance program revealed by Snowden in 2013. And even more so that there is just one other comment among 320 pointing this out

      I just think it's silly to obsess over words like that. There are many words that take on different meanings in different contexts and can be associated with different events, ideas, products, time periods, etc. Would you feel better if they named it "Polyhedron"?
      • jll29 8 hours ago
        What the OP was talking about is the negative connotation that goes with the word; it's certainly a poor choice from a marketing point of view.

        You may say it's "silly to obsess", but it's like naming a product "Auschwitz" and saying "it's just a city name" -- it ignores the power of what Geoffrey N. Leech called "associative meaning" in his taxonomy of "Seven Types of Meaning" (Semantics, 2nd ed. 1981): speaking that city's name evokes images of piles of corpses of gassed, undernourished human beings, walls of gas chambers with fingernail scratches, and lampshades made of human skin.
        • ZpJuUuNaQ5 7 hours ago
          Well, I don't know anything about marketing and you might have a point, but the severity of impact of these two words is clearly very different, so it doesn't look like a good comparison to me. It would raise quite a few eyebrows and more if, for example, someone released a Linux distro named "Auschwitz OS"; meanwhile, even in the software world, there are multiple products that incorporate the word prism in various ways [1][2][3][4][5][6][7][8][9]. I don't believe that an average user encountering the word "prism" immediately starts thinking about the NSA surveillance program.

          [1] https://www.prisma.io/
          [2] https://prism-pipeline.com/
          [3] https://prismppm.com/
          [4] https://prismlibrary.com/
          [5] https://3dprism.eu/en/
          [6] https://www.graphpad.com/features
          [7] https://www.prismsoftware.com/
          [8] https://prismlive.com/en_us/
          [9] https://github.com/Project-Prism/Prism-OS
          • vladms 7 hours ago
            I think the idea was to try to explain why it is a problem to choose such a name; it is not a comparison of intensity / importance.

            I am not sure you can make an argument of "other people are doing it too". Lots of people do things that are not in their interest (e.g. smoking, to pick the easy one).

            As others mentioned, I did not have the negative connotation related to the word prism either, but I am not sure how one could check that anyhow. It is not like I have not been surprised these years about what some other people think, so who knows... Maybe someone with experience in marketing could explain how it is done.
            • adammarples 4 hours ago
              But without the extremity of the Auschwitz example, it suddenly is not a problem. Prism is an unbelievably generic word, and I had not even heard of the Snowden one until now, nor would I remember it if I had. Prism is one step away from "Triangle" in terms of how generic it is.
              • order-matters 3 hours ago
                One more perspective to add: while I did not know the NSA program was called Prism, it did give me pause to find out in this thread. OpenAI surely knows what it was called; at least they should. So it raises the question of why.

                If they claim in a private meeting with people at the NSA that they did it as a tribute to them and a bid for partnership, who would anyone here be to say they didn't? Even if they didn't... which is only relevant because OpenAI processes an absolute shitton of data the NSA would be interested in.
          • helsinkiandrew 7 hours ago
            And of course there's the prism itself:

            https://en.wikipedia.org/wiki/Prism_(optics)

            I remember the NSA Prism program, but hearing "prism" today I would think first of Newton, optics, and rainbows.
        • 946789987649 7 hours ago
          Do a lot of people know that Prism is the name of the program? I certainly didn't, and I consider myself fairly switched on in general.
          • BlueTemplar 6 hours ago
            It's likely to be an age thing too. Were you in hacker-related spaces when the Snowden scandal happened?

            (I expect a much higher than average share of people in academia were also part of these spaces.)
        • andrewinardeer 1 hour ago
          We had a local child day care provider call themselves ISIS. That was a blast.
          • SoftTalker 19 minutes ago
            We had a local siding company call themselves "The Vinyl Solution". Some people are just tone-deaf.
        • FrustratedMonky 2 hours ago
          I think the point is that on the sliding scale of words we are no longer allowed to use, "Prism" does not reach the level of "Auschwitz".

          Most people don't even remember Snowden at this point.
      • black_puppydog 5 hours ago
        I have to say I had the same reaction. Sure, "prism" shows up in many contexts. But here it shows up in the context of a company and product that is *already* constantly in the news for its lackluster regard for other people's expectations of privacy and copyright, and for generally trying to "collect it all", as it were -- and that, as GP mentioned, in an international context that doesn't put these efforts in the best light.

        They're of course free to choose this name. I'm just also surprised they would do so.
      • jimbokun 2 hours ago
        But the contexts are closely related.

        Large-scale technology projects that people are suspicious and anxious about. There are a lot of people anxious that AI will be used for mass surveillance by governments. So you pick the name of another project that was used for mass surveillance by a government.
      • mayhemducks 1 hour ago
        You do realize that obsessing over words like that is a pretty major part of what programming and computer science are, right? Linguistics is highly intertwined with computer science.
      • mc32 4 hours ago
        Plus there are lots of “legacy” products with the name prism in them. I also don’t think the public makes the connection. It’s mainly people who care to be aware of government overreach who think it’s a bad word association.
      • bergheim 5 hours ago
        Sure. Like Goebbels. Because they gobble things up.

        Also, Nazism. But different context, years ago, so whatever I guess?

        Hell, let's just call it Hitler. Different context!

        Given what they do, it is an insidious name. Words matter.
        • fortyseven 2 hours ago
          Comparing words with unique widespread notoriety to a simple, everyday one. Try again.
          • rvnx 1 hour ago
            Prism in tech is very well known to be a surveillance program.

            Coming from a company involved in sharing data with intelligence services (it's the law, you can't escape it), this is not wise at all. Unless nobody at OpenAI has heard of it.

            It was one of the biggest scandals in tech 10 years ago.

            They could have called it "Workspace". More clear, more useful, no need to use a code word; that would have been fine for internal use.
        • ZpJuUuNaQ5 4 hours ago
          So you have to resort to the most extreme examples in order to make it a problem? Do you also think of Hitler when you encounter the word "vegetarian"?
          • collingreen 3 hours ago
            Is that what you think Hitler was most famous for?

            The extreme examples are an analogy that highlights the shape of the comparison with a more generally loathed / less niche example.

            OpenAI is a thing with lots and lots of personal data that consumers trust OpenAI not to abuse or lose. They chose a product name that matches a US government program that secretly and illegally breached exactly that kind of trust.

            Hitler-the-vegetarian isn't a great analogy, because vegetarianism isn't related to what made Hitler bad. Something closer might be Exxon or BP making a hair gel called "Oilspill", or DuPont making a nail polish called "Forever Chem".

            They could have chosen anything, but they chose a name specifically matching a recent data-stealing and abuse scandal.
          • gegtik 3 hours ago
            Huh... seems like a head-scratcher why it would be relevant to this argument to select objectionable words instead of benign, inert words.
    • sunaookami 11 hours ago
      > Has the technical and scientific community in the US already forgotten this huge breach of trust?

      Have you ever seen the comment section of a Snowden thread here? A lot of users here call for Snowden to be jailed, call him a Russian asset, play down the reports, etc. These are either NSA sock-puppet accounts, or they won't bite the hand that feeds them (employees of companies willing to breach their users' trust).

      Edit: see my comment here in a Snowden thread: https://news.ycombinator.com/item?id=46237098
      • jll29 8 hours ago
        What Snowden did was heroic. What was shameful was the world's underwhelming reaction. Where were all the images in the media of protest marches, like against the Vietnam War?

        Someone once said, "Religion is the opium of the people." Today, give people a mobile device and some doom-scrolling social-media celebrity nonsense app, and they wouldn't notice if their own children didn't come home from school.
        • vladms 6 hours ago
          Looking back, I think allowing more centralized control of various forms of media by private parties did much worse overall than government surveillance in the long run.

          For me the problem was not surveillance; the problem is addiction-focused app building (plus the monopoly), and that never seemed to be a secret. Only now are there some attempts to do something (like Australia and France banning children, which I am not sure is feasible or efficient, but at least it is more than zero).
        • linkregister 2 hours ago
          Protests in 2025 alone have outnumbered those during the Vietnam War.

          Protesting is a poor proxy for American political engagement.

          Child neglect and missing-children rates are lower than they were 50 years ago.
      • linkregister 2 hours ago
        Are you asserting that anyone who disagrees with you is either a propaganda campaign or a cynical insider? Nobody who opposes you has a truly held belief?
      • TiredOfLife 10 hours ago
        Him being (or, best case, becoming) a Russian asset turned out to be true.
        • omnimus 10 hours ago
          Like it would matter for any of the revelations. And like he had any other choice to avoid prison. Look at how it worked out for Assange.
          • jll29 8 hours ago
            They both undertook something they believed in and showed extreme courage.

            And they did manage to get the word out. They are both relatively free now, but it is true that they both paid a price.

            Idealism is following your principles despite that price, not escaping/evading the consequences.
          • BlueTemplar 6 hours ago
            Assange became a Russian asset *while* in a whistleblowing-related job.

            (And he is also the reason why Snowden ended up in Russia. Though it's possible that the flight plan they had was still the best one in that situation.)
            • Matl 5 hours ago
              So exposing corruption in Western governments is not worthwhile because it 'helps' Russia? Aha, got it.

              I am increasingly wondering what remains of the supposed superiority of the Western system if we're willing to compromise on everything to suit our political ends.

              The point was supposed to be that the truth is worth having out there for the purpose of an informed public, no matter how it was (potentially) obtained.

              In the end, we may end up with everything we fear about China, but with worse infrastructure, and still somehow think we're better.
            • observationist 24 minutes ago
              Obama and Biden chased him into a corner. They actually bragged about chasing him into Russia, because it was a convenient narrative to smear Snowden with after the fact.

              It was Russia, or vanish into a black site, never to be seen or heard from again.
        • lionkor 8 hours ago
          If the messenger has anything to do with Russia, even after the fact, we should dismiss the message and remember to never look up.
        • vezycash 9 hours ago
          Truth is truth, no matter the source.
          • TiredOfLife 7 hours ago
            Whole truth is truth.

            https://en.wikipedia.org/wiki/Lie#:~:text=citation%20needed%5D-,Lying%20by%20omission,-%2C%20also%20known%20as
            • rvnx 1 hour ago
              There is also the truth that you say, and the truth that you feel.
        • jimmydoe 8 hours ago
          He could have been a Chinese asset, but the CCP is a coward.
    • pageandrew 12 hours ago
      These things don't really seem related at all. It's a pretty generic term.
      • Phelinofist 11 hours ago
        FWIW, my immediate reaction was the same: "That reminds me of NSA PRISM."
        • addandsubtract 7 hours ago
          It reminded me of the code highlighter [0] and the ORM Prisma [1].

          [0] https://prismjs.com/
          [1] https://www.prisma.io/
        • wmeredith 4 hours ago
          It reminded me of the album cover of Dark Side of the Moon by Pink Floyd.
        • karmakurtisaani 10 hours ago
          Same here.
          • 3form 7 hours ago
            Same, to the point where I was wondering if someone deliberately named it so. But I expect that whoever made this decision simply doesn't know or care.
      • kakacik 9 hours ago
        I came here based on the headline expecting some more CIA & NSA shit; that word has been tarnished for a few decades in the better part of the IT community (the part that actually cares about this craft beyond a paycheck).
      • vaylian 11 hours ago
        And yet, the name immediately reminded me of the Snowden revelations.
      • ImHereToVote 12 hours ago
        They are farming scientists for insight.
    • JasonADrury 12 hours ago
      This comment might make more sense if there were some connection or similarity between the OpenAI "Prism" product and the NSA surveillance program. There doesn't appear to be.
      • Schlagbohrer 12 hours ago
        Except that this lets OpenAI gain research data and scientific ideas by stealing from their users, using their huge mass surveillance platform. So, tremendous overlap.
        • concats 11 hours ago
          Isn't most research and scientific data already shared openly (usually in publications)?
        • isege 11 hours ago
          This comment allows ycombinator to steal ideas from their users' comments, using their huge mass news platform. Tremendous overlap indeed.
    • WiSaGaN 6 hours ago
      OpenAI has a former NSA director on its board. [1] This connection makes the dilution of the term "PRISM" in search results a potential benefit to NSA interests.

      [1]: https://openai.com/index/openai-appoints-retired-us-army-general/
    • observationist 27 minutes ago
      I think it's probably just apparent to a small set of people; we're usually the ones yelling at the stupid cloud technologies that are ravaging online privacy and liberty, anyway. Given the recent venture into adtech, I was expecting some sort of OpenAI automated user-data-handling program, but since it's a science project with nothing to do with surveillance and user data, I think it's fine.

      If it were part of their adtech systems, them dipping their toe into the enshittification pool, it would have been a legendarily tone-deaf project name; but as it is, I think it's fine.
    • wmeredith 4 hours ago
      I get what you're saying, but that was 13 years ago. How long before the branding statute of limitations runs out on a simple noun?
    • johanyc 58 minutes ago
      I did not make the association at all
    • yayitswei 4 hours ago
      FWIW, I was going to make the same comment about the naming, but you beat me to it.
    • saidnooneever 10 hours ago
      Tons of things are called prism.

      (Full disclosure: yes, they will be handing in PII on demand, like the same kind of deals; this is 'normal'. 2013 showed us no one gives a shit.)
    • bandrami 12 hours ago
      I mean, it's also the name of the national engineering education journal and a few other things. There are only 14,000 five-letter words in English, so you're going to have collisions.
    • CalRobert 5 hours ago
      Do they care what anyone over 30 thinks?
    • LordDragonfang 1 hour ago
      Probably gonna get buried at the bottom of this thread, but:

      There's a good chance they just asked GPT-5.2 for a name. I know for a fact that when some of the OpenAI models get stuck in the "weird" state associated with LLM psychosis, three of the things they really like talking about are spirals, fractals, and prisms. Presumably, there's some general bias toward those concepts in the weights.
    • lrvick 6 hours ago
      Considering OpenAI is deeply rooted in an anti-freedom ethos and surveillance capitalism, I think it is quite a self-aware and fitting name.
    • igleria 5 hours ago
      Money is a powerful amnesiac.
    • chromanoid 9 hours ago
      Sorry, did you read this? https://blog.cleancoder.com/uncle-bob/2018/12/14/SJWJS.html

      I personally associate Prism with Silverlight - Composite Web Apps With Prism (https://learn.microsoft.com/en-us/archive/msdn-magazine/2009/july/composing-applications-with-silverlight-and-prism), due to personal reasons I don't want to talk about ;)
    • aa-jv 10 hours ago
      > Has the technical and scientific community in the US already forgotten this huge breach of trust?

      Yes; IMHO, there is a great deal of ignorance of the actual contents of the NSA leaks.

      The agitprop against Snowden as a "Russian agent" has successfully occluded the *actual* scandal, which is that the NSA has built a totalitarian-authoritarian apparatus that is *still in wide use*.

      Autocrats' general hubris about their own superiority has been weaponized against them. Instead of actually addressing the issue with America's repressive military-industrial complex, they kill the messenger.
    • alfiedotwtf 11 hours ago
      > Has the technical and scientific community in the US already forgotten this huge breach of trust?

      We haven't forgotten... it's mostly that we're all jaded given that there have been zero ramifications, so what's the use of complaining -- you're better off pushing shit up a hill.
    • alexpadula 8 hours ago
      That’s funny af
    • aargh_aargh 12 hours ago
      I still can't get over the Apple thing. Haven't enjoyed a ripe McIntosh since. </s>
  • vitalnodo 1 day ago
    Previously, this existed as crixet.com [0]. At some point it used WASM for client-side compilation, and later transitioned to server-side rendering [1][2]. It now appears that there will be no option to disable AI [3]. I hope the core features remain available and won't be artificially restricted. Compared to Overleaf, there were fewer service limitations: it was possible to compile more complex documents, share projects more freely, and even do so without registration.

    On the other hand, Overleaf appears to be open source and at least partially self-hostable, so it's possible some of these ideas or features will be adopted there over time. Alternatively, someone might eventually manage to move a more complete LaTeX toolchain into WASM.

    [0] https://crixet.com
    [1] https://www.reddit.com/r/Crixet/comments/1ptj9k9/comment/nvhl1gz/
    [2] https://news.ycombinator.com/item?id=42009254
    [3] https://news.ycombinator.com/item?id=46394937
    • crazygringo 23 hours ago
      I'm curious how it compares to Overleaf in terms of features. Putting aside the AI aspect entirely, I'm simply curious if this is a viable Overleaf competitor -- especially since it's free.

      I do self-host Overleaf, which is annoying but ultimately doable if you don't want to pay the $21/mo (!).

      I do have to wonder for how long it will be free or even supported, though. On the one hand, remote LaTeX compiling gets expensive at scale. On the other hand, it's only a fraction of a drop in the bucket compared to OpenAI's total compute needs. But I'm hesitant to use it because I'm not convinced it'll still be around in a couple of years.
      • efficax 23 hours ago
        Overleaf is a little curious to me. What's the point? Just install LaTeX. Claude is very good at manipulating LaTeX documents, and I've found it effective at fixing up layouts for me.
        • radioactivist 22 hours ago
          In my circles the killer features of Overleaf are the collaborative ones (easy sharing, multi-user editing with track changes/comments). Academic writing in my community went from emailed draft-new-FINAL-v4.tex files (or a shared folder full of those files) to people just dumping things on Overleaf fairly quickly.
        • bhadass 22 hours ago
          Collaboration is the killer feature, tbh. Overleaf is basically Google Docs meets LaTeX: you can have multiple coauthors editing simultaneously, leave comments, see revision history, etc.

          A lot of academics aren't super technical and don't want to deal with git workflows or syncing local environments. They just want to write their fuckin' paper (WTFP).

          Overleaf lets the whole research team work together without anyone needing to learn version control or debug their local TeX Live installation.

          It's also nice for quick edits from any machine without setting anything up. The "just install it locally" advice assumes everyone's comfortable with that, but plenty of researchers treat computers as appliances, lol.
          • joker666 2 hours ago
            I am curious whether Git plus a local install can solve this collaboration issue with pull requests.
        • jdranczewski 21 hours ago
          To add to the points raised by others, "just install LaTeX" is not, imo, a very strong argument. I prefer working in a local environment, but many of my colleagues much prefer a web app that "just works" to figuring out what MiKTeX is.
        • crazygringo 22 hours ago
          I can code in monospace (of course), but I just can't write in monospace markup. I need something approaching WYSIWYG. It's just how my brain works -- I need the italics to look like italics; I need the footnote text to not interrupt the middle of the paragraph.

          The visual editor in Overleaf isn't true WYSIWYG, but it's close enough. It feels like working in a word processor, not in a code editor. And the interface overall feels simple and modern.

          (And that's just for solo usage -- it's really the collaborative stuff that turns it into a game-changer.)
          • gmac 12 hours ago
            Same for me. I wrote my PhD in LyX for that reason.
          • withinboredom 12 hours ago
            I use Inkdrop for this, then pandoc to go from Markdown to LaTeX, then a final typesetting pass. Inkdrop is great for WYSIWYG Markdown editing.
        • baby 16 hours ago
          LaTeX is such a nightmare to work with locally.
        • MuteXR 8 hours ago
          "Just install LaTeX" is really not a valid response when the LaTeX toolchain is a genuine nightmare to work with. I could do it but still use Overleaf. Managing that locally is just not worth it.
        • spacebuffer 19 hours ago
          I'd use git in this case. I am sure there are other reasons to use Overleaf, otherwise it wouldn't exist, but this seems like a solved issue with git.
          • jll29 7 hours ago
            You can actually use git (it's also integrated in Overleaf).

            You can even export ZIP files if you like (for any cloud service, it's not a bad idea to clone your repo once in a while to avoid being stuck in case of unlikely downtime).

            I have both a hosted instance (thanks to Overleaf/ShareLaTeX Ltd.) and I'm also a paying user of the pro group license (>500€/year) for my research team. It's great - esp. for smaller research teams - to have the maintenance outsourced to a commercial provider.

            On a good day, I'd spend 40% of my time in Overleaf, 10% in Sublime/Emacs, 20% in email, 10% in Google Scholar/Semantic Scholar, and 10% in EasyChair/OpenReview, with the rest in meetings.
          • universa1 12 hours ago
            You can use git with Overleaf, but from practical experience: getting even "mathematically/technically inclined" people to consistently use git takes a lot of time... which one could spend on other, more fun things :-)
        • 3form 10 hours ago
          The LaTeX ecosystem is a UX nightmare, coming from someone who had to deal with it recently. Overleaf just works.
        • warkdarrior 22 hours ago
          Collaboration is at best rocky when people have different versions of LaTeX packages installed. Also, merging changes from multiple people in git is a pain when dealing with scientific, nuanced text.

          Overleaf ensures that everyone looks at the same version of the document and processes the document with the same set of packages and options.
        • lou1306 8 hours ago
          The first three things are, in this order: collaborative editing, collaborative editing, collaborative editing. Seriously, this cannot be overstated.

          Then: the LaTeX distribution is always up to date; you can run it on limited resources; it has an endless supply of conference and journal templates (so you don't have to scavenge them yourself off a random conference/publisher website); the Git backend means a) you can work offline and b) version control comes for free. These are just off the top of my head.
    • vicapow 23 hours ago
      The deeper I got, the more I realized that *really* supporting the entire LaTeX toolchain in WASM would mean simulating an entire Linux distribution :( We wanted to support Beamer, LuaLaTeX, mobile (which wasn't working with WASM because of resource limits), etc.
      • storystarling 7 hours ago
        The WASM constraints make sense given the resource limits, especially for mobile. If you are moving that compute server-side, though, I am curious about the unit economics. LaTeX pipelines are surprisingly heavy, and I wonder how you manage the margins on that infrastructure at scale.
      • seazoning 22 hours ago
        We had been building literally the same thing for the last 8 months, along with a great browsing environment over arXiv -- might just have to sunset it.

        Any plans to integrate Typst anytime soon?
        • vicapow 21 hours ago
          I'm not against Typst. I think its integration would be a lot easier and more straightforward; I just don't know if it's really that popular yet in academia.
          • gunalx 13 hours ago
            It's not yet, but it's gaining traction.
      • BlueTemplar 17 hours ago
        But what's the point?

        To end up with yet another shitty web app (shitty because it runs inside a browser, in particular its interface)?

        Why not focus efforts on making a proper program (you know, with IBM menu bars and keyboard shortcuts), but with collaborative tools too?
        • jll29 7 hours ago
          You are right in pointing out that the Web browser isn't the most suitable UI paradigm for highly interactive applications like a scientific typesetting system/text editor.

          I have occasionally lost a paragraph just by accidentally marking a few lines and pressing [Backspace].

          But at the moment, there is no better option than Overleaf, and while I encourage you to write what you propose if you can, Overleaf will be the bar that any such system needs to be compared against.
          • BlueTemplar 5 hours ago
            OP is talking about developing an alternative to Overleaf. But they are still trying to do it inside a browser!
    • regenschutz 4 hours ago
      I was using Crixet before I switched over to Typst [0] for all of my writing. However, back when I did use Crixet, I never used its AI features. It was just a much better alternative to Overleaf for me. Sad to see that AI will be forced on all Crixet users now.

      [0]: https://typst.app
    • swyx 20 hours ago
      we did a podcast with the Crixet founder and Kevin Weil of OAI on the process: https://www.youtube.com/watch?v=W2cBTVr8nxU&pp=2Aa0Bg%3D%3D
      • vicapow 15 hours ago
        thanks for hosting us on the pod!
    • songodongo 1 day ago
      So this is the product of an acquisition?
      • vitalnodo 23 hours ago
        > Prism builds on the foundation of Crixet, a cloud-based LaTeX platform that OpenAI acquired and has since evolved into Prism as a unified product. This allowed us to start with a strong base of a mature writing and collaboration environment, and integrate AI in a way that fits naturally into scientific workflows.

        They're quite open about Prism being built on top of Crixet.
    • doctorpangloss 21 hours ago
      It seems bad for OpenAI to make this about LaTeX documents, which will now be associated, visually, with AI slop. The opposite of what anyone wants, really. Nobody wants you to know they used a chatbot!
      • eloisant 5 hours ago
        This is just because LaTeX is widely used by researchers.

        Also, yes: LaTeX being source code, it's much easier to get an AI to generate LaTeX than to integrate into MS Word.
      • y1n0 17 hours ago
        Please refrain from incorporating em dashes into your LaTeX document. In summary, the absence of em dashes in LaTeX.
      • amitav1 21 hours ago
        Am I missing something? LaTeX is associated with slop now?
        • nemomarx 19 hours ago
          If a common AI tool produces LaTeX documents, the association will be created, yeah. Right now, LaTeX would be a high indicator of manual effort, right?
          • jasonfarnon 19 hours ago
            I don't think so. I think LaTeX was one of academics' earlier use cases for ChatGPT, back in 2023. That's when I started noticing tables in every submitted paper looking way more sophisticated than they ever did. (The other early use case, of course, being grammar/spelling. Overnight, everyone got fluent and typos disappeared.)
            • jmdaly 17 hours ago
              It's funny: I was reading a bunch of recent papers not long ago (I haven't been in academia in over a decade) and I was really impressed with the quality of the writing in most of them. I guess in some cases LLMs are the reason for that!
              • jll29 7 hours ago
                I recently got wrongly accused by a reviewer of using LLMs to help write an article. He complained that our (my and my co-worker's) use of "to foster" read "like it was created by ChatGPT". (If our paper was fluent/eloquent, that's perhaps because having an M.A. in English literature helped.)

                I don't think any particular word alone can be used as an indicator of LLM use, although certain formatting cues are good signals (dashes, smileys, response structure).

                We were offended but kept quiet to get the article accepted, and we changed some instances of some words to appease them (which thankfully worked). But the false accusation left a bit of a bad aftertaste...
              • trentnelson 15 hours ago
                If you've got an existing paragraph written that you just *know* could be rephrased more eloquently, and you can describe the type of rephrasing/restructuring you want... LLMs absolutely slap at that.
          • MITSardine 18 hours ago
            LaTeX is already standard in fields that have math notation, and perhaps others as well. I guess the promise is that "formatting is automatic" (asterisk), so its popularity probably extends beyond math-heavy disciplines.
          • x-complexity 18 hours ago
            > Right now latex would be a high indicator of manual effort, right?

            ...no?

            Just one Google search for "latex editor" showed more than two on the first page:

            https://www.overleaf.com/

            https://www.texpage.com/

            It's not that different from using a markdown editor.
  • i2km 13 hours ago
    This is going to be the concrete block that finally breaks the back of the academic peer-review system, i.e. it's going to be a DDoS attack on a system that didn't even handle the load before LLMs.

    Maybe we'll need to go back to some sort of proof-of-work system, i.e. only accepting physically mailed copies of manuscripts, possibly handwritten...
    • thomasahle 9 hours ago
      I tried Prism, but it's actually a lot more work than just using Claude Code. The latter allows you to "vibe code" your paper with no manual interaction, while Prism actually requires you to review every change.

      I actually think Prism promotes a much more responsible approach to AI writing than "copying from ChatGPT" or the like.
    • haspok 11 hours ago
      > This is going to be the concrete block which finally breaks the back of the academic peer review system

      Exactly, and I think this is good news. Let's break it so we can fix it at last. Nothing will happen until a real crisis emerges.
      • suddenlybananas 7 hours ago
        There are problems with the medical system, therefore we should set hospitals on fire to motivate people to make them better.
      • port11 9 hours ago
        Disrupting a system without good proposals for its replacement sounds like a recipe for disaster.
        • butlike 4 hours ago
          Reign of Terror: https://en.wikipedia.org/wiki/Reign_of_Terror
    • aembleton 10 hours ago
      Maybe OpenAI will sell you 'Lens', which will assist with sorting through the submissions and narrowing down the papers worth reviewing.
    • jltsiren 6 hours ago
      Or it makes gatekeepers even more important than before. Every submission to a journal will be desk-rejected, unless it is vouched for by someone one of the editors trusts. And people won't even look at a new paper, unless it's vouched for by someone / published in a venue they trust.
    • make3 12 hours ago
      Overleaf basically already has the same thing.
    • csomar 7 hours ago
      That will just create a market for hand-writers. Good thing the economy is doing very well, right? So there aren't that many desperate people who will do it en masse and for peanuts.
    • boxed 10 hours ago
      Handwriting is super easy to fake with plotters.
      • eternauta3k 6 hours ago
        Is there something out there to simulate the non-uniformity and errors of real handwriting?
    • 4gotunameagain 12 hours ago
      > i.e. only accepting physical mailed copies of manuscripts, possibly hand-written...

      And you think the Indians will not hand-write the output of LLMs?

      Not that I have a better suggestion myself...
  • tarcon 9 hours ago
    This is an actual prompt in the video: "What are the papers in the literature that are most relevant to this draft and that I should consider citing?"

    They probably wanted: "... that I should read?" So that this is at least marketed as more than a fake-paper generation tool.
    • mFixman 9 hours ago
      You can tell that they consulted 0 scientists to verify the clearly AI-written draft of this video.

      The target audience of this tool is not academics; it's OpenAI investors.
    • jtr1 3 hours ago
      At last, our scientific literature can turn to its true purpose: mapping the entire space of arguable positions (and then some).
    • floitsch 6 hours ago
      I felt the same, but then I thought of experts in their field. For example, my PhD advisor would already know all these papers. For him, the prompt would actually be similar to what was shown in the video.
  • syntex 20 hours ago
    The Post-LLM World: Fighting Digital Garbage: https://archive.org/details/paper_20260127/mode/2up

    Mini paper: the future isn't AI replacing humans; it's humans drowning in cheap artifacts. New unit of measurement proposed: verification debt. Also introduces: recursive garbage → model collapse.

    (A little joke on Prism.)
    • Springtime 17 hours ago
      > The Post-LLM World: Fighting Digital Garbage https://archive.org/details/paper_20260127/mode/2up

      This appears to just be the output of LLMs itself? It credits GPT-5.2 and Gemini 3 exclusively as authors, has a public-domain license (appropriate for AI output), and is only several paragraphs in length.
      • doodlesdev 15 hours ago
        Which proves its own point! Absolutely genius! The cost asymmetry between producing garbage and checking it truly is becoming a problem in recent years, with the advent of LLMs and generative AI in general.
        • parentheses 13 hours ago
          Totally agree!

          I feel like this means that working in any group where individuals compete against each other results in an AI-vs-AI content-generation competition, where the human is stuck verifying/reviewing.
          • dormento 6 hours ago
            > Totally agree!

            Not a dig at your (very sensible) comment, but now I always do a double take when I see anyone effusively approving of someone else's ideas. AI turned me into a cynical bastard :(
      • syntex 10 hours ago
        Yes, I did it as a joke inspired by the PRISM release. But unexpectedly, it makes a good point. And the funny part for me was that the paper lists only LLMs as authors.

        Also, in a world where AI output is abundant, we humans become the scarce resource: the "tools" in the system that provide some connection to reality (grounding) for the LLMs.
    • mrbonner 18 hours ago
      Plot twist: humans become the new proof-of-work consensus mechanism. Instead of GPUs burning electricity to hash blocks, we burn our sanity verifying whether that Medium article was written by a person or a particularly confident LLM.

      "Human Verification as a Service": finally, a lucrative career where the job description is literally "read garbage all day and decide if it's authentic garbage or synthetic garbage." LinkedIn influencers will pivot to calling themselves "Organic Intelligence Validators" and charge $500/hr to squint at emails and go "yeah, a human definitely wrote this passive-aggressive Slack message."

      The irony writes itself: we built machines to free us from tedious work, and now our job is being the tedious work for the machines. Full circle. Poetic, even. Future historians (assuming they're still human and not just Claude with a monocle) will mark this as the moment we achieved peak civilization: when the most valuable human skill became "can confidently say whether another human was involved."

      Bullish on verification miners. Bearish on whatever remains of our collective attention span.
      • kinduff 18 hours ago
        Human CAPTCHA exists to figure out whether your clients are human or not, so you can segment them and apply human pricing. Synthetics, of course, fall into different tiers. The cheaper ones.
      • direwolf20 18 hours ago
        Bullish on verifiers who accept money to verify fake things.
  • JBorrow 23 hours ago
    From my perspective as a journal editor and a reviewer, these kinds of tools cause many more problems than they actually solve. They make the 'barrier to entry' for submitting vibed, semi-plausible journal articles much lower, which I understand some may see as a benefit. The drawback is that scientific editors and reviewers provide those services for free, as a community benefit. One example was a submission from someone whose affiliation was their undergraduate institution (in accounting), who submitted a paper on cosmology, entirely vibe-coded and vibe-written. This just wastes our (already stretched) time. A significant fraction of submissions are now vibe-written and come from folks who are looking to 'boost' their CV (even having a 'submitted' publication is seen as a benefit), which is really not the point of these journals at all.

    I'm not sure I'm convinced of the benefit of lowering the barrier to entry to scientific publishing. The hard part always has been, and always will be, understanding the research context (what's been published before) and producing novel and interesting work (the underlying research). Connecting this together in a paper is indeed a challenge, and a skill that must be developed, but it is really a minimal part of the process.
    • SchemaLoad 21 hours ago
      GenAI largely seems like a DDoS on free resources. The effort to review this stuff is now massively more than the effort to "create" it, so really, what is the point of even submitting it? The reviewer could have generated it themselves. I'm seeing it in software development, where coworkers are submitting massive PRs they generated but hardly read or tested, shifting the real work to the PR review.

      I'm not sure what the final state will be here, but it seems we are going to find it increasingly difficult to find any real factual information on the internet going forward, particularly as AI starts ingesting its own generated fake content.
      • cryzinger 21 hours ago
        More relevant than ever:

        > The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.

        https://en.wikipedia.org/wiki/Brandolini%27s_law
        • trees101 19 hours ago
          The P≠NP conjecture in CS says checking a solution is easier than finding one. Verifying a Sudoku is fast; solving it from scratch is hard. But Brandolini's Law says the opposite: refuting bullshit costs way more than producing it.

          Not actually contradictory. Verification is cheap when there's a spec to check against. 'Valid Sudoku?' is mechanical. But 'good paper?' has no spec. That's judgment, not verification.
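
          (To make the cheap-verification side concrete, a minimal Python sketch -- the function name and grid encoding are just illustrative, not from any library:)

            # Checking a *completed* 9x9 Sudoku is mechanical: every row,
            # column, and 3x3 box must contain the digits 1-9 exactly once.
            # Producing a valid grid from scratch is the hard direction.
            def is_valid_sudoku(grid):  # grid: 9x9 list of lists of ints
                full = set(range(1, 10))
                rows = [set(row) for row in grid]
                cols = [{grid[r][c] for r in range(9)} for c in range(9)]
                boxes = [{grid[3*br + r][3*bc + c]
                          for r in range(3) for c in range(3)}
                         for br in range(3) for bc in range(3)]
                return all(unit == full for unit in rows + cols + boxes)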
          • degamad 14 hours ago
            > The P≠NP conjecture in CS says checking a solution is easier than finding one...

            ...for NP-hard problems.

            It says nothing about the difficulty of finding or checking solutions of polynomial ("P") or exponential ("EXPTIME") problems.
          • bwfan123 17 hours ago
            Producing BS can be equated to generating statements without caring for their truth value. Generating them is easy. Refuting them requires one to find a proof or a contradiction, which is a lot of work and is equivalent to "solving" the statement. As an analogy, refuting BS is like solving satisfiability, whereas generating BS is like generating propositions.
          • rspijker 11 hours ago
            It's not contradictory, because solving and producing bullshit are very different things. Generating fewer than 81 random numbers between 1 and 9 is probably also cheaper than verifying the correctness of a Sudoku.
        • monkaiju 20 hours ago
          Wow, the three comments from OC to here are all bangers; they combine into a really nice argument against these toys.
      • overfeed 21 hours ago
        > The effort to review this stuff is now massively more than the effort to "create" it

        I don't doubt the AI companies will soon announce products that claim to solve this very problem, generating turnkey submission reviews. Double-dipping is very profitable.

        It appears LLM parasitism isn't close to being done, and it keeps finding new commons to spoil.
        • fooker 18 hours ago
          There are a dozen startups that do this.
      • wmeredith 4 hours ago
        > Seeing it in software development where coworkers are submitting massive PRs they generated but hardly read or tested. Shifting the real work to the PR review.

        I've seen this complaint in a lot of places, but the solution seems obvious to me: massive PRs should be rejected. This was true before AI was a thing.
      • Spivak 20 hours ago
        In some ways it might be a good thing that shorthand signals of quality are being destroyed, because it forces all of us to meaningfully engage with the work. No more "LGTM +1" when every PR *looks* good.
      • toomuchtodo 20 hours ago
        *AI slop security reports submitted to curl* - https://gist.github.com/bagder/07f7581f6e3d78ef37dfbfc81fd1d1cd

        *HN Search: curl AI slop* - https://hn.algolia.com/?dateRange=all&page=0&prefix=true&query=curl%20AI%20slop&sort=byDate&type=story
        • Cornbilly 19 hours ago
          This one is hilarious: https://hackerone.com/reports/3516186

          If I submitted this, I'd have to punch myself in the face repeatedly.
          • toomuchtodo 19 hours ago
            The great disappointment is that the humans submitting these just don't care that it's slop and that they're wasting another human's time. To them, it's a slot machine: you just keep cranking the arm until coins come out. "Prompt until payout."
    • InsideOutSanta 22 hours ago
      I'm scared that this type of thing is going to do to science journals what AI-generated bug reports are doing to bug bounties. We're truly living in a post-scarcity society now, except that the thing we have an abundance of is garbage, and it's drowning out everything of value.
      • willturman 21 hours ago
        In a corollary to Sturgeon's Law, I'd propose Altman's Law: "In the Age of AI, 99.999...% of everything is crap."
        • SimianSci 21 hours ago
          Altman's Law: 99% of all content is slop.

          I can get behind this. It assumes a tool will need to be made to help determine the 1% that isn't slop, at which point I assume we will have reinvented web search once more.

          Has anyone looked at reviving PageRank?
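
          (For what it's worth, the core of classic PageRank is only a few lines of power iteration; a toy Python sketch over a made-up four-page link graph:)

            import numpy as np

            # Toy PageRank via power iteration; the link graph is made up.
            # links[i] lists the pages that page i points to.
            links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
            n, d = len(links), 0.85  # d is the usual damping factor

            rank = np.full(n, 1.0 / n)
            for _ in range(50):  # iterate until the ranks stabilize
                new = np.full(n, (1 - d) / n)
                for page, outs in links.items():
                    # each page splits its current rank across its outlinks
                    for out in outs:
                        new[out] += d * rank[page] / len(outs)
                rank = new
            print(rank)  # page 2 ranks highest: most incoming weight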
          • _kb 6 hours ago
            For images, surely this is the next pivot for hot dog / not hot dog.
          • Imustaskforhelp 20 hours ago
            I mean, Kagi is probably the PageRank revival we are talking about.

            I have heard from people here that Kagi can help remove slop from searches, so I guess yeah.

            Although I am a DDG user, and I love using DDG because it's free as well; but I can see how for some people price is a non-issue, and they might like Kagi more.

            So Kagi / DDG (DuckDuckGo), yeah.
            • ectospheno 14 hours ago
              I've been a Kagi subscriber for a while now. I recently picked up ChatGPT Business and am now considering dropping Kagi, since I am only using it for trivial searches. Every comparison I've done of deep searches, by hand and with AI, ended up with the same results in far less time using AI.
            • jll29 20 hours ago
              Has anyone kept an eye on who uses which back-end?

              DDG used to be meta-search on top of Yahoo, which doesn't exist anymore. What do Gabriel and co-workers use now?
              • selectodude 20 hours ago
                I think they all use Bing now.
              • direwolf20 18 hours ago
                Kagi is mostly stealing results from Google and disenshittifying them, but it mixes in other engines like Yandex, Mojeek, and Bing.

                DDG is Bing.
      • techblueberry 22 hours ago
        There's this thing where all the thought leaders in software engineering ask "What will change about building a business when code is free?", and while there are some cool things, I've also thought it could have some pretty serious negative externalities. I think this question is going to become big everywhere -- business, science, etc. -- which is: OK, you have all this stuff, but is it valuable? Which of it actually takes away value?
        • wmeredith 4 hours ago
          The value is in the same place: solving people's problems.

          Now that code is cheaper (not quite free yet), skills further up the abstraction chain become more valuable.

          Programming and design skills are less valuable. However, you still have to know what to build: product and UX skills are more valuable. You still have to know how to build it: software-architect skills are more valuable.
        • jimbokun 2 hours ago
          I think about this more and more when I see people online talking about their "agents managing agents" producing... something... 24/7/365.

          Very rarely is there anything about WHAT these agents are producing and why it's important and valuable.
        • SequoiaHope 20 hours ago
          To be fair, the question “what will change” does not presume the changes will be positive. I think it’s the right question to ask, because change is coming whether we like it or not. While we do have agency, there are large forces at play which impact how certain things will play out.
      • jplusequalt 22 hours ago
        Digital pollution.
      • jcranmer 22 hours ago
        The first casualty of LLMs was the slush pile -- the unsolicited-submission pile at publishers. We've since seen bug bounty programs and open source repositories buckle under the load of AI-generated contributions. And all of these have the same underlying issue: the LLM makes it easy to produce things that don't immediately look like garbage, which makes the volume of submissions skyrocket while the time-to-reject also goes up slightly, because submissions now pass the first (but only the first) absolute-garbage filter.
        • storystarling 20 hours ago
          I run a small print-on-demand platform, and this is exactly what we're seeing. The submissions used to be easy to filter with basic heuristics or cheap classifiers, but now the grammar and structure are technically perfect. The problem is that running a stronger model to detect the semantic drift or hallucinations costs more than the potential margin on the book. We're pretty much back to manual review, which destroys the unit economics.
          • direwolf20 18 hours ago
            If it's print-on-demand, why does it matter? Why shouldn't you accept someone's money to print slop for them?
            • wmeredith 4 hours ago
              Some book houses print on demand for wide audiences. It's not just for the author.
          • lupire 18 hours ago
            Why would detecting AI be more expensive than creating it?
      • jll29 20 hours ago
        Soon, poor people will talk to an LLM; rich people will get human medical care.
        • Spivak 20 hours ago
          I mean, I'm currently getting "expensive" medical care, and the doctors are still all using AI scribes. I wouldn't assume there would be a gap in anything other than perception. I imagine doctors who cater to the fuck-you rich will just put more effort into hiding it.

          No one, at any level, wants to do notes.
          • golem14 18 hours ago
            My experience has been that the transcriptions are way more detailed and correct when doctors use these scribes.

            You could argue that not writing down everything provides a greater signal-to-noise ratio. Fair enough, but if something seemingly inconsequential is not noted and something is missed, that could worsen medical care.

            I'm not sure how this affects malpractice claims - it's now easier to prove (with notes) that the doc "knew" about some detail that would otherwise not have been noted down.
  • jll29 20 hours ago
    I totally agree. I spend my whole day, from getting up to going to bed (not before reading HN!), on reviews for a conference I'm co-organizing later this year.

    So I was not amused by this announcement at all, however easy it may make my own life as an author (I'm pretty happy to do my own literature search, thank you very much).

    Also remember, we have no guarantee that these tools will still exist tomorrow; all these AI companies are constantly pivoting and throwing a lot of things at the wall to see what sticks.

    OpenAI chose not to build a serious product, as there is no integration with the ACM DL, the IEEE DL, SpringerNatureLink, the ACL Anthology, Wiley, Cambridge/Oxford/Harvard University Press, etc. - only papers that are not peer reviewed (arXiv.org) are available/have been integrated. Expect a flood of BS your way.

    When my students submit a piece of writing, I can ask them to orally defend their opus maximum (more and more often, ChatGPT's...); I can't do the same with anonymous authors.
      • MITSardine18 hours ago
        Speaking of conferences, might this not be the way to judge this work? You could imagine only orally defended work being publishable, or at least carrying the prestige of vetting, in a bit of an old-school science revival.
        • Majromax3 hours ago
          Chicken and egg problem: since conferences have limited capacity, you need to pre-filter submissions to see who gets a presentation spot.
      • lupire18 hours ago
        Self-solving problem: AI oral exam administration: https://www.gatech.edu/news/2024/09/24/ai-oral-assessment-tool-uses-socratic-method-test-students-knowledge
    • bloppe23 hours ago
      I wonder if there's a way to tax the frivolous submissions. There could be a submission fee that would be fully reimbursed iff the submission is actually accepted for publication. If you're confident in your paper, you can think of it as a deposit. If you're spamming journals, you're just going to pay for the wasted time.

      Maybe you get reimbursed for half, as long as there are no obvious hallucinations.
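      The incentive math is easy to sketch (the deposit size and acceptance rates below are invented for illustration):

          # Expected out-of-pocket cost of one submission under a refundable deposit.
          DEPOSIT = 200.0  # illustrative, $

          def expected_cost(p_accept: float) -> float:
              # The deposit comes back only on acceptance.
              return DEPOSIT * (1.0 - p_accept)

          print(expected_cost(0.30))  # careful author, well-targeted journal: $140 at risk
          print(expected_cost(0.01))  # slop spammer: ~$198 lost per submission, times hundreds

      The tax scales with volume times rejection probability, which is exactly the profile of the spam it is meant to price out.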
      • JBorrow23 hours ago
        The journal that I'm an editor for is 'diamond open access', which means we charge no submission fees and no publication fees, and publish open access. This model is really important in allowing legitimate submissions from a wide range of contributors (e.g. PhD students in countries with low levels of science funding). Publishing in a traditional journal usually costs around $3000.
        • NewsaHackO22 hours ago
          Those journals are really good for getting practice in writing and submitting research papers, but sometimes they are already seen as less impactful because of the quality of accepted papers. At least where I am, I don't think the advent of AI writing is going to affect how they are seen.
          • agnishom18 hours ago
            In the field of Programming Languages and Formal Methods, many of the top journals and conference proceedings are open access
        • lupire18 hours ago
          Who pays the operating expenses?
        • methuselah_in22 hours ago
          Welcome to the new world of fake stuff, I guess.
      • azan_3 hours ago
        You must have no idea how scientific publishing works. The typical acceptance rate for an OK/good journal is 10-20% (and it was like that even before LLMs). Also, it's a great idea to make the business of scientific publishing even more predatory: now scientists writing articles for free, reviewing for free, and then having to pay for publication will also have to pay to even submit something, with a 90% chance of rejection. Also think about what kind of incentives that would create.
      • willturman20 hours ago
        *If the penalty for a crime is a fine, then that law exists only for the lower class.*

        In other words, such a structure would not dissuade bad actors with large financial incentives to push something through a process that grants validity to a hypothesis. A fine isn't going to stop tobacco companies from spamming submissions that say smoking doesn't cause lung cancer, or social media companies from spamming submissions that say their products aren't detrimental to mental health.
        • Majromax3 hours ago
          > In other words, such a structure would not dissuade bad actors with large financial incentives to push something through a process that grants validity to a hypothesis.

          That's not the right threat model. The existing peer review process is already weak to high-effort but conflicted research.

          Instead, the threat model is closer to that of spam, where the submitting authors don't care about the content of their submission at all but need X publications in high-impact outlets for their CV or grant application. Predatory journals exploit this as part of a pay-to-play problem, but the low reputation of those journals limits their desirable impact factor.

          This threat model relies on frequent but low-quality submissions, and a submission fee would make taking multiple kicks at the can unviable.
        • bloppe13 hours ago
          I'm sure my crude idea has its shortcomings, but this feels superfluous. Deep-pocketed propagandists can do all sorts of things to pump their message whether a slop tax exists or not. There may or may not be existing countermeasures at journals for that. This just isn't really about that. It's about making sure that, in the process of spamming the journal, they also fund the review process, which would otherwise simply bleed time and money.
      • s0rce23 hours ago
        That would be tricky. I often submitted to multiple high-impact journals, going down the list until someone accepted the paper. You try to ballpark where you can go, but it can be worth aiming high. Maybe this isn't a problem and there should be payment for the effort of screening the paper, but then I would expect the reviewers to be paid for their time.
        • noitpmeder22 hours ago
          I mean, your methodology also sounds suspect. You're just going down a list until it sticks. You don't care where it ends up (I'm sure within reason), just as long as it is accepted and published somewhere (again, within reason).
          • antasvara21 hours ago
            No different from applying to jobs. Much like companies, there are a variety of journals with varying levels of prestige, or that fit your paper better or worse. You don't know in advance which journals will respond to your paper, which ones just received submissions similar to yours, etc.

            Plus, the time from submission to acceptance/rejection can be long. For cutting-edge science, you can't really afford to wait to hear back before applying to another journal.

            All this to say that spamming 1,000 journals with a submission is bad, but submitting to the journals in your field that are at least decent fits for your paper is good practice.
          • niek_pas22 hours ago
            Scientists are incentivized to publish in as high-ranking a journal as possible. You’re always going to have at least a few journals where your paper is a good fit, so aiming for the most ambitious journal first just makes sense.
          • jll2920 hours ago
            It's standard practice, nothing suspect about their approach. And you won't go lower and lower and lower still, because at some point you'll be tired of re-formatting, or a doctoral candidate's funding will be used up, or the topic has "expired" (= been overtaken by reality/competition).
          • azan_3 hours ago
            Are you at all aware of how scientific publishing works?
          • mathematicaster21 hours ago
            This is effectively standard across the board.
      • throwaway8582523 hours ago
        Pay-to-publish journals already exist.
        • eloisant5 hours ago
          I'm pretty sure the reviewers of those are still volunteers; the publisher is just making even more money!
        • bloppe23 hours ago
          This is sorta the opposite of pay to publish. It's pay to be rejected.
        • olivia-banks23 hours ago
          I would think it would act more like a security deposit, and you'd get back 100%, no profit for the journal (at least in that respect).
      • pixelready22 hours ago
        I’d worry about creating a perverse incentive to farm rejected submissions. Similar to those renter application fee scams.
      • mathematicaster21 hours ago
        Pay to review is common in Econ and Finance.
        • skissane21 hours ago
          Variation I thought of on pay-to-review:

          Suppose you are an independent researcher writing a paper. Before submitting it for review to journals, you could hire a published author in that field to review it for you (independently of the journal), tell you whether it is submission-worthy, and help you improve it to the point that it is. If they wanted, they could be listed as coauthor, and if they don't want that, at least you'd acknowledge their assistance in the paper.

          Because I think there are two types of people who might write AI slop papers: (1) people who just don't care and want to throw everything at the wall and see what sticks; (2) people who genuinely desire to seriously contribute to the field, but don't know what they are doing. Hiring an advisor could help the second group.

          Of course, I don't know how willing people would be to be hired to do this. Someone who was senior in the field might be too busy, might cost too much, or might worry about damage to their own reputation. But there are so many unemployed and underemployed academics out there...
      • utilize180822 hours ago
        Better yet, make a "polymarket" for papers where people can bet on which paper will make it, and rely on "expertise arbitrage" to punish spam.
        • ezst22 hours ago
          Doesn't stop the flood, i.e. the unfair asymmetry between the effort to produce vs. the effort to review.
          • utilize180819 hours ago
            Not if submissions require some small mandatory bet.
        • direwolf2018 hours ago
          Now accepting money from slop companies to verify their slop as notslop
      • petcat22 hours ago
        > There could be a submission fee that would be fully reimbursed if the submission is actually accepted for publication.

        While well-intentioned, I think this is just gate-keeping. There are *mountains* of research that result in nothing interesting whatsoever (aside from learning about what doesn't work). And all of that is still valuable knowledge!
        • ezst21 hours ago
          Sure, but now we can't even assume that such research is submitted in good faith anymore. There just seems to be no perfect solution.

          Maybe something like a "hierarchy/DAG?" of trusted peers, where groups like universities certify the relevance and correctness of papers by attaching their name and a global reputation score to them. When it's found that a paper is "undesirable" and doesn't pass a subsequent review, their reputation score deteriorates (with the penalty propagating along the whole review chain), in such a way that:

          - the overall review model is distributed, hence scalable (everybody may play the certification game and build a reputation score while doing so)
          - trusted/established institutions have an incentive to keep their global reputation score high and either put a very high level of scrutiny into the review, or delegate to very reputable peers
          - "bad actors" are immediately punished and universally recognized as such
          - "bad groups" (such as departments consistently spamming with low-quality research) become clearly identified as such within the greater organisation (the university), which can encourage a mindset of quality above quantity
          - "good actors within a bad group" are not penalised either, because they could circumvent their "bad group" on the global review market by having reputable institutions (or intermediaries) certify their good work

          There are loopholes to consider, like a black market of reputation trading (I'll pay you generously to sacrifice a bit of your reputation to get this bad science published), but even that cannot pay off long-term in an open system where all transactions are visible.

          Incidentally, I think this may be a rare case where a blockchain makes some sense?
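          The propagation mechanic is easy to sketch in Python; everything below (names, scores, the decay factor) is invented purely for illustration:

              # A failed post-publication review penalizes the whole certification
              # chain, with the penalty decaying as it propagates toward delegators.
              reputation = {"UniA": 0.90, "DeptB": 0.75, "LabC": 0.60}
              chain = ["UniA", "DeptB", "LabC"]  # outermost certifier first
              DECAY = 0.5  # each step away from the paper absorbs half the hit

              def penalize(chain, base_penalty):
                  for distance, inst in enumerate(reversed(chain)):
                      reputation[inst] = max(0.0, reputation[inst] - base_penalty * DECAY ** distance)

              penalize(chain, base_penalty=0.10)
              print(reputation)  # {'UniA': 0.875, 'DeptB': 0.7, 'LabC': 0.5}

          The open question is calibrating the decay and the base penalty so that certifying honestly stays profitable while rubber-stamping does not.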
          • jll2920 hours ago
            You have some good ideas there; it's all about incentives and about public reputation.

            But it should also be fair. I once caught a team at a small Indian branch of a very large three-letter US corporation violating the "no double submission" rule of two conferences: they submitted the same paper to two conferences, and both naturally landed in my reviewer inbox, for a topic I am one of the experts in.

            But all the other employees should not be penalized for the violations of 3 researchers.
          • gus_massa21 hours ago
            This idea looks very similar to journals! Each journal has a reputation; if they publish too much crap, the crap is not cited and the impact factor decreases. They also have an informal reputation, because the impact index has problems too.

            Anyway, how will universities check the papers? Someone must read the preprints, like the current reviewers. Someone must check the incoming preprints, find reviewers, and make the final decision, like the current editors. ...
          • amitav121 hours ago
            How would this work for independent researchers?

            (no snark)
    • Rperry217422 hours ago
      This keeps repeating in different domains: we lower the cost of producing artifacts, and the real bottleneck becomes evaluating them.

      For developers, academics, editors, etc., in any review-driven system the scarcity is good human judgement, not text volume. AI doesn't remove that constraint, and arguably puts more of a spotlight on the ability to separate the shit from the quality.

      Unless review itself becomes cheaper or better, this just shifts work further downstream while disguising the change as "efficiency".
      • SchemaLoad21 hours ago
        This has been discussed previously as "workslop", where you produce something that looks at surface level like high-quality work, but just shifts the burden to the receiver of the workslop to review and fix.
      • lonelyasacloud6 hours ago
        > Unless review itself becomes cheaper or better, this just shifts work further downstream and disguising the change as "efficiency"

        Or the providers of the models are capable of providing accepted/certified guarantees as to the quality of the output that their models and systems produce.
      • vitalnodo22 hours ago
        This fits into the broader evolution of the visualization market. As data grows, visualization becomes as important as processing. This applies not only to applications, but also to relating texts through ideas close to transclusion in Ted Nelson's Xanadu. [0]

        In education, understanding is often best demonstrated not by restating text, but by presenting the same data in another representation and establishing the right analogies and isomorphisms, as in Explorable Explanations. [1]

        [0] https://news.ycombinator.com/item?id=40295661

        [1] https://news.ycombinator.com/item?id=22368323
    • pickleRick24319 hours ago
      I'm curious whether you'd be in favor of other forms of academic gate-keeping as well. Isn't the overall lower quality of submissions (an ongoing trend with a history far pre-dating LLMs) an issue? Isn't the real question (that you are alluding to) whether there should be limits to the democratization of science? If my tone seems acerbic, it is only because I sense cognitive dissonance between the anti-AI stance common among many academics and the purported support for inclusivity measures.

      "which is really not the point of these journals at all" - it seems that it very much is one of the main points? Why do you think people publish in journals instead of just putting their work on the arXiv? Do you think postdocs and APs are suffering through depression and stressing out about their publications because they're agonizing over whether their research has genuinely contributed substantively to the academic literature? Are academic employers poring over the publishing record of their researchers and obsessing over how well they publish in top journals in an altruistic effort to ensure that the research of their employees has made the world a better place?
      • JBorrow1 hour ago
        I don't really understand how my saying that this tool isn't good for science counts as gatekeeping. The vibe-written papers that I am talking about have little-to-no valuable scientific content, and as such would always be rejected. It's just that it's way easier than before to produce something that _looks_ reasonable from a five-second glance, and that causes additional load on an already strained system.

        I also don't understand your second paragraph at all.
      • agnishom18 hours ago
        > whether there should be limits to the democratization of science?

        That is an interesting philosophical question, but not the question we are confronted with. A lot of LLM-assisted material has the _signals_ of novel research without having its _substance_.
        • pickleRick24317 hours ago
          LLMs are tools. In the hands of adept, conscientious researchers, they can only be a boon, assisting in the crafting of the research manuscript. In the hands of less adept, less conscientious users, they accelerate the production of slop. The poster I'm responding to seems to be noting an asymmetry: those who find the most use in these tools could be inept researchers who have no business submitting their work. This is because experienced researchers find writing up their results relatively easy.

          To me, this is directly relevant to the issue of democratization of science. There seems to be a tool that is inconveniently resulting in the "wrong" people accelerating their output. That is essentially the complaint here, rather than any criticism inherent to LLMs (e.g. water/resource usage, environmental impact, psychological/societal harm, etc.). The post I'm responding to could have been written if LLMs were replaced by any technology that resulted in less experienced or capable researchers disproportionately being able to submit to journals.

          To be concrete, let's just take one of Prism's capabilities: the ability to "turn whiteboard equations or diagrams directly into LaTeX". What a monstrous thing to give to the masses! Before, those uneducated cranks would send Word docs to journals with poorly typeset equations, making it a trivial matter to filter them into the trash bin. Now, they can polish everything up and pass off their chicken scratch as respectable work. Ideally, we'd put up enough obstacles so that only those who should publish will publish.
          • varjag3 hours ago
            See point #1 in the famous *Ten Signs a Claimed Mathematical Breakthrough is Wrong*:

            https://scottaaronson.blog/?p=304

            By far the easiest quality signal is now out of the window.
          • agnishom14 hours ago
            LLMs do assist adept researchers in crafting their manuscripts, but I do not think they make the quality much better.

            My objection is not that these are the "wrong people". They are just regular people with excellent tools but not necessarily great scientific ideas.

            Yes, it was easier to trash a crank's work before based on their unLaTeXed diagrams. Now they might have a very professional-looking diagram, but their work is still not great mathematics. Except that now the editor has a much harder time finding out who submitted a worthwhile paper.

            In what way do you think the feature of "LaTeXing a whiteboard diagram" is democratizing mathematics? I do not think there are many people who have exceptional mathematical insights but are unable to publish them because they cannot typeset their work properly.
            • pickleRick24312 hours ago
              The democratization is mostly in allowing people from outside the field with mediocre mathematical ideas to finally put them to paper and submit them to mediocre journals. And occasionally it might help a modern-day Ramanujan with "exceptional mathematical insights" and a highly unconventional background not have his work dismissed as that of a crank. Yes, most people with exceptional mathematical insights can typeset quite well. Democratization as I understand the term has quite a higher bar, though.

              Being against this is essentially to be in favor of a form of discrimination by proxy: if you can't typeset, then likely you can't do research either. And wouldn't it be really annoying if those people who can't research could magically typeset. It's a fundamentally undemocratic impulse: since those who cannot typeset well are unlikely to produce quality mathematics, we can (and should) use this as an effective barrier to entry. If you replaced the ability to typeset with a number of other traits, these would be rather controversial positions.
              • agnishom8 hours ago
                It would indeed be nice if there were a mechanism to find people like Ramanujan who have excellent insights but cannot communicate them effectively.

                But LLMs are not really helping. With all the beautifully typeset papers with immaculate prose, Ramanujan's papers are going to be buried deeper!

                To some extent, I agree with you that it is "discrimination by proxy", especially with the typesetting example. But you could think of examples where cranks could very easily fool themselves into thinking that they understand the essence of the material without understanding the details. E.g., "I understand fluid dynamics very well. No, I don't need to work out the differential equations. AI can do the bean counting for me."
      • Eridrus15 hours ago
        The people on the inside often like all the gatekeeping.
    • MITSardine18 hours ago
      If I may be the Devil's advocate, I'm not sure I fully agree with "The hard part always has been, and always will be, understanding the research context (what's been published before) and producing novel and interesting work (the underlying research)".

      Plenty of researchers *hate* writing and will only do it at gunpoint. Or rather, they delegate it all to their underlings.

      I don't see an issue with generative writing in principle. The Devil is in the details, but I don't see this as much different from "hey grad student, write me this paper". And generative writing already exists as copy-paste, which makes up like 90% of any random paper given the incrementality of it all.

      I was initially a little indignant at the "find me some plausible refs and stick them in the paper" section of the video but, then again, isn't this what most people already do? Just copy-paste the background refs from the colleague's last paper introduction and maybe add one from a talk they saw in the meantime, plus whatever the group & friends produced since then.

      My experience is most likely skewed (as all are), but I haven't met a permanent researcher who wrote their own papers yet, and most grad students and postdocs hate writing. Literally the only times I saw someone motivated to write papers (in a masochistic way) were just before applying to a permanent position or while wrapping up their PhD.

      Onto your point, though: I agree this is somewhat worrisome in that, by reaction, the barrier to entry might rise by way of discriminating based on credentials.
      • Otterly999 hours ago
        Thank you for bringing this nuanced view.

        I am also not sure why so many people are vehemently against this. I would bet that at least 90% of researchers would agree that the writing-up is definitely not the part of the work they prefer (to stay polite). As you mentioned, the work is usually delegated to students, and those students already had access to LLMs if they wanted to generate it.

        In my opinion, most of these tools become problematic when people use them without caution. Unfortunately, even in the sciences, people are not as careful and pragmatic as we would like to imagine, and a lot of people are cutting corners, especially in those "lesser" areas like writing and presenting your work.

        Overall, I think this has the potential to reshape the publication system, which is long overdue.
      • raphman9 hours ago
        I am a rather slow writer who certainly might benefit from something like Prism.

        A good tool would encourage me, help me while I am writing, and maybe set up barriers that keep me from taking shortcuts (e.g. pushing me to re-read the relevant paragraphs of a paper that I cite).

        Prism does none of these things. Instead it pushes me towards sloppy practices, such as sprinkling citations between claims. Why won't ChatGPT tell me how to build a bomb, but Prism will happily fabricate fake experimental results for me?
    • jjcm21 hours ago
      The comparison to make here is that a journal submission is effectively a pull request to humanity's scientific knowledge base. That PR has to be reviewed. We're already seeing the effects of this with open source code: the number of PR submissions has skyrocketed, overwhelming maintainers.

      This is still a good step in the direction of AI-assisted research, but as you said, for the moment it creates as many problems as it solves.
    • maxkfranz22 hours ago
      I generally agree.<p>On the other hand, the world is now a different place as compared to when several prominent journals were founded (1869-1880 for Nature, Science, Elsevier). The tacit assumptions upon which they were founded might no longer hold in the future. The world is going to continue to change, and the publication process as it stands might need to adapt for it to be sustainable.
      • ezst22 hours ago
        As I understand it, the problem isn't publication or how it's changing over time; it's the challenge of producing new science when the existing body is muddied with plausible lies. That warrants a new process by which to assess the inherent quality of a paper, but even if it comes as globally distributed, the cheats have a huge advantage, considering the asymmetry between the effort to vibe-produce and the tedious human review.
        • maxkfranz21 hours ago
          That’s a good point. On the other hand, we’ve had that problem long before AI. You already need to mentally filter papers based on your assessment of the reputability of the authors.<p>The whole process should be made more transparent and open from the start, rather than adding more gatekeeping. There ought to be openness and transparency throughout the entire research process, with auditing-ability automatically baked in, rather than just at the time of publication. One man’s opinion, anyway.
    • egorfine7 hours ago
      > these kinds of tools cause many more problems than they actually solve

      For whom? For OpenAI these tools are definitely solutions. They develop by throwing various AI-powered stuff at the wall to see what sticks. These tools also demonstrate to investors that innovation has not stalled and that AI usage is growing.

      Same with Microsoft: none of the AI stuff they are shoving down users' throats was actually designed for the users. All this stuff exists only so that token usage grows for the shareholders to see.

      Similar with Google, although no one can deny real innovation happening there.
    • mrandish22 hours ago
      As a non-scientist (but long-time science fan and user), I feel your pain with what appears to be a layered, intractable problem.

      > > who are looking to 'boost' their CV

      Ultimately, this seems like a key root cause: misaligned incentives across a multi-party ecosystem. And as always, incentives tend to be deeply embedded and highly resistant to change.
    • i00017 hours ago
      Perhaps the real issue is the gate-keeping scientific publishing model. Journals had a place and role, and peer review is a critical aspect of the scientific process, but new times (the internet, citizen science, higher levels of scientific literacy, and now AI) diminish the benefits of journals creating "barriers to entry", as you put it.
      • desolate_muffin17 hours ago
        I for one hope not to live in a world where academic journals fall out of favor and are replaced by vibe-coded papers by citizen scientists with inflated egos from one too many “you’re absolutely right!” Claude responses.
        • i00015 hours ago
          Me neither, but what you present is a false dichotomy. Science used to be a pastime of the wealthy elites; it became a profession. By opening it up, progress was accelerated. The same will happen when publication is made more open and accessible.
          • BlueTemplar5 hours ago
            And then, Einstein was a "citizen scientist", wasn't he?
    • boplicity22 hours ago
      Is it at all possible to have a policy that bans the submission of any AI-written text, or text that was written with the assistance of AI tools? I understand that this would, by necessity, be under an "honor system", but maybe it could help weed out papers not worth the time?
      • currymj21 hours ago
        This is probably a net negative, as there are many very good scientists without very strong English skills.

        The early years of LLMs (when they were good enough to correct grammar but not enough to generate entire slop papers) were an equalizer. We may end up here, but it would be unfortunate.
        • BlueTemplar5 hours ago
          But then, assuming we are fine with this state of things with LLMs:

          why would it be upon them to submit in English, when reviewers and readers can instead use an LLM translator to read the paper?
    • jasonfarnon19 hours ago
      I'm certain your journal will be using LLMs to review incoming articles, if it isn't already. I also don't think this would be in response to the flood of LLM-generated articles. Even if authors were the same as pre-LLM, journals would succumb to the temptation, at least at the big 5 publishers, which already have a contentious relationship with their referees.
    • jascha_eng21 hours ago
      Why not filter out papers from people without credentials? And also publicly call them out and register them somewhere, so that their submission rights can be revoked by other journals and conferences after "vibe writing".

      These acts must have consequences so that people stop committing them. You can use AI if you are doing it well, but if you are wasting everyone's time you should just be excluded from the discourse altogether.
      • direwolf2017 hours ago
        What do credentials have to do with good science? There are already some roadblocks to publishing science in important-sounding journals, but it's important for the neutrality of the scientific process that in principle anyone can do it.
    • eloisant5 hours ago
      The real problem is that researchers are pushed to publish because publication is the only way their career can advance. It's not even about "boosting" your CV; as a researcher, your publication history IS your CV.

      It was already a problem 25 years ago when I did my Ph.D., and I don't think things have changed that much since then.

      This encourages researchers to publish barely valuable results, or to cut one article into multiple ones with small variations to increase their number of publications. It also encourages publishers to create more conferences and more journals to serve researchers' need to publish.

      I remember many experienced professors telling me cynically about this, about all the techniques they had to blow up one small finding into many articles.

      Anyway, research slop started way before AI. AI is probably going to make the problem worse, but the root issue has been there for a long time.
    • parentheses13 hours ago
      This dynamic would create even more gate-keeping using credentials, which is already a problem with academia.
    • keithnz21 hours ago
      Wouldn't AI actually be good for filtering, given it's going to be a lot better at knowing what has been published? It also seems possible that it could work out which papers have novel ideas, or at least come up with some kind of likelihood score.
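      One naive shape such a filter could take: embed the new abstract and score it against the published corpus. The embed function below is a stand-in for whatever embedding model you would plug in; this is a sketch of the idea, not a working screener:

          import numpy as np

          def embed(text: str) -> np.ndarray:
              # Hypothetical: swap in a real embedding model here.
              raise NotImplementedError

          def novelty_score(abstract: str, corpus_vectors: np.ndarray) -> float:
              """1.0 = nothing similar in the corpus; 0.0 = near-duplicate."""
              v = embed(abstract)
              v = v / np.linalg.norm(v)
              corpus = corpus_vectors / np.linalg.norm(corpus_vectors, axis=1, keepdims=True)
              return 1.0 - float(np.max(corpus @ v))

      The catch is that "dissimilar to everything published" describes both genuine novelty and pure nonsense, so a score like this could rank confident slop above careful incremental work.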
    • usefulposter23 hours ago
      Completely agree. Look at the independent research that gets submitted under "Show HN" nowadays:

      https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=true&query=%22academia.edu%22&sort=byDate&type=story

      https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=true&query=%22zenodo%22%20Show%20HN&sort=byDate&type=story
    • lupsasca23 hours ago
      I am very sympathetic to your point of view, but let me offer another perspective. First off, you can already vibe-write slop papers with AI, even in LaTeX format; tools like Prism are not needed for that. On the other hand, it can really help researchers improve the quality of their papers. I'm someone who collaborates with many students and postdocs. My time is limited and I spend a lot of it on LaTeX drudgery that can and should be automated away, so I'm excited for Prism to save time on writing, proofreading, making TikZ diagrams, grabbing references, etc.
      • fuzzfactor5 hours ago
        This is what I see: you need more of an active, accomplished helper at the keyboard.

        If I can't have that, the next best thing is a helper while I'm at the keyboard my damn self.

        > Why LaTeX is the bottleneck: scientists spend hours aligning diagrams, formatting equations, and managing references—time that should go to actual science, not typesetting

        This is supposed to be only a temporary situation until people recover from the cutbacks of the 1970s, and a more comprehensive number of scientists once again have their own secretary.

        Looks like the engineers at Crixet were tired of waiting.
      • CJefferson22 hours ago
        What the heck is the point of a reference you never read?
        • lupsasca21 hours ago
          By "grabbing references" I meant queries of the type "add paper [bla] to the bibliography". That seems useful to me!
          • nestes21 hours ago
            Focusing in on "grabbing references": it's as easy as drag-and-drop if you use Zotero. It can copy/paste references in BibTeX format. You can even customize it through the BetterBibTeX extension.

            If you're not a Zotero user, I can't recommend it enough.
            • MITSardine18 hours ago
              I have a terrible memory for details, I'll admit. An LLM I can just tell "Find that paper by X's group on Method That Does This And That" and that finds me the paper is enticing. I say this because I abandoned Zotero once the list of refs became large enough that I could never find anything quickly.
      • noitpmeder22 hours ago
        AI generating references seems like a hop away from absolute unverifiable trash.
    • SecretDreams22 hours ago
      I appreciate and sympathize with this take. I'll just note that, in general, journal publications have gone considerably downhill over the last decade, even before the advent of AI. Frequency has gone up, quality has gone down, and actually checking whether everything in an article is valid becomes quite challenging as frequency goes up.

      This is a space that probably needs substantial reform, much like grad school models in general (IMO).
  • parentheses13 hours ago
    It feels generally a bit dangerous to use an AI product to work on research when (1) it's free and (2) the company hosting it makes money by shipping productized research.
    • roflmaostc4 hours ago
      I am not so skeptical about AI usage for paper writing, as the paper will often be public days later anyway (via pre-print servers such as arXiv).

      So yes, you use it to write the paper, but soon it is public knowledge anyway.

      I am not sure there is much to learn from the authors' draft.
    • biscuit1v98 hours ago
      Why do you think these points would make the usage dangerous?
    • z3t410 hours ago
      They have to monetize somehow...
  • raincole18 hours ago
    I know many people have negative opinions about this.

    I'd also like to share what I've seen. Since GPT-4o became a thing, everyone I know who submits academic papers in my non-English-speaking country (N > 5) has been writing papers in our native language and translating them with GPT-4o *exclusively*. It has been the norm for quite a while. If hallucination is such a serious problem, it has been so for a year and a half.
    • direwolf2018 hours ago
      Translation is something Large Language Models are inherently pretty good at, without controversy, even though the output should still be independently verified. It's a language task and they are language models.
      • kccqzy17 hours ago
        Of course. Transformers were originally invented for Google Translate.
      • biophysboy17 hours ago
        Are they good at translating scientific jargon specific to a niche within a field? I have no doubt LLMs are excellent at translating well-trodden patterns; I'm a bit suspicious otherwise.
        • andy12_7 hours ago
          In my experience of using it to translate ML work between English and Spanish/Galician, it translates jargon literally and too eagerly, to the point that I have to tell it to keep specific terms in English to avoid it sounding too weird (for most modern ML jargon there really isn't a Spanish translation).
        • mbreese17 hours ago
          It seems to me that jargon would tend to be defined in one language and minimally adapted in other languages, so I'm not sure that would be much of a concern.
          • fuzzfactor4 hours ago
            I would look at non-English research papers along with the English ones in my field, and the more jargon, plain numbers, and equations there were, the more I could get out of them without much further translation.
        • disconcision17 hours ago
          For better or for worse, most specific scientific jargon is already going to be in English.
    • ivirshup18 hours ago
      I've heard that now that AI conferences are starting to check for hallucinated references, rejection rates are going up significantly. See also the NeurIPS hallucinated-references kerfuffle [1].

      [1]: https://statmodeling.stat.columbia.edu/2026/01/26/machine-learning-research-is-not-serious-research-and-therefore-hallucinated-references-are-not-necessarily-a-big-deal-agrees-a-prestigious-group-of-machine-learning-researchers/
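      The basic check is mechanical enough to sketch. This assumes the public Crossref REST API (api.crossref.org/works with its query.bibliographic parameter) and deliberately crude string matching, and it would miss legitimate arXiv-only work, so treat it as an illustration only:

          import requests
          from difflib import SequenceMatcher

          def reference_resolves(ref_title: str, threshold: float = 0.8) -> bool:
              """Flag references whose best Crossref match isn't close to the cited title."""
              resp = requests.get(
                  "https://api.crossref.org/works",
                  params={"query.bibliographic": ref_title, "rows": 1},
                  timeout=10,
              )
              items = resp.json()["message"]["items"]
              if not items or not items[0].get("title"):
                  return False
              similarity = SequenceMatcher(
                  None, ref_title.lower(), items[0]["title"][0].lower()
              ).ratio()
              return similarity >= threshold

      A venue running something like this over every bibliography entry gets a cheap shortlist of citations a human should spot-check.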
      • doodlesdev15 hours ago
        Honestly, hallucinated references should simply get the submitter banned from ever applying again. Anyone submitting papers or anything else with hallucinated references should be publicly shamed. The problem isn't only the LLMs hallucinating; it's lazy and immoral humans who don't bother to check the output, wasting everyone's time and corroding public trust in science and research.
        • lionkor7 hours ago
          I fully agree. Not reading your own references should be grounds for banning, but that's impossible to check. Hallucinated references cannot have been read, so by definition they should get people banned.
          • fuzzfactor4 hours ago
            > Not reading your own references

            This could be considered in degrees.

            Like when you only need a single table from another researcher's 25-page publication: you would cite it to be thorough, but it wouldn't be so bad if you didn't read very much of their other text. Perhaps none at all.

            Maybe one of the very helpful things is not just reading every reference in detail, but actually looking up every one in detail to begin with?
      • SilverBirch7 hours ago
        Yeah, that's not going to work for long. You can draw a line in 2023 and say "every paper before this isn't AI". But in the future, you're going to have AI-generated papers citing other AI slop papers that slipped through the cracks; because of the cost of doing research vs the cost of generating AI slop, the AI slop papers *will* start to outcompete the real research papers.
        • fuzzfactor4 hours ago
          > the cost of doing research vs the cost of generating

          > slop papers *will* start to outcompete the real research papers.

          This started to rear its ugly head when electric typewriters got more affordable.

          Sometimes all it takes is faster horses and you're off to the races :\
    • utopiah13 hours ago
      It's quite a safe use case if you maintain provenance, because there is a ground truth to compare to, namely the untranslated paper.
  • asveikau20 hours ago
    Good idea to name this after the spy program that Snowden talked about.
    • pazimzadeh20 hours ago
      idk if OpenAI knew that Prism is already a very popular desktop app for scientists, and that it's one of the last great pieces of optimized native software:

      https://www.graphpad.com/
      • varjag20 hours ago
        They don't care. Musk stole a chunk of Heinlein's literary legacy with Grok (which, unlike prism, wasn't a common word) and no one batted an eye.
        • DonaldPShimoda20 hours ago
          > Grok (which unlike prism wasn't a common word)

          "Grok" was a term used in my undergrad CS courses in the early 2010s. It's been a pretty common word in computing for a while now, though the current generation of young programmers and computer scientists seem not to know it as readily, so it may be falling out of fashion in those spaces.
          • Fnoord18 hours ago
            Wikipedia, about Groklaw [1]:

            > Groklaw was a website that covered legal news of interest to the free and open source software community. Started as a law blog on May 16, 2003, by paralegal Pamela Jones ("PJ"), it covered issues such as the SCO-Linux lawsuits, the EU antitrust case against Microsoft, and the standardization of Office Open XML.

            > Its name derives from "grok", roughly meaning "to understand completely", which had previously entered geek slang.

            [1] https://en.wikipedia.org/wiki/Groklaw
          • varjag7 hours ago
            Grok was specifically coined by Heinlein in _Stranger in a Strange Land_. It had been used in nerd circles for decades before your undergrad times, but was never broadly known.
          • milleramp17 hours ago
            He is referencing the book Stranger in a Strange Land, written in 1961.
        • sincerely18 hours ago
          Grok has been nerd slang for a while; I bet it's in that ESR list of hacker lingo. And hell, if every company in Silicon Valley gets to name itself after something from Lord of the Rings, why can't he pay homage to an author he likes?
        • Fnoord20 hours ago
          He stole a letter, too.
          • tombert20 hours ago
            That bothers me more than it should. Every single time I see a new post about Twitter, I think that there's some update for X11 or X Server or something, only to be reminded that Twitter has been renamed.
      • intothemild20 hours ago
        I very much doubt they knew much about what they were building if they didn't know this.
    • XCSme17 hours ago
      I thought this was about the Prism database ORM. Or was that Prisma?
  • bmaranville20 hours ago
    Having a chatbot that can natively "speak" LaTeX seems like it might be useful to scientists who already use it exclusively for their work. Writing papers is incredibly time-consuming for a lot of reasons, and having a helper to make quick (non-substantive) edits could be great. Of course, that's not how people will use it...

    I would note that Overleaf's main value is as a collaborative authoring tool rather than a great LaTeX experience, but science is ideally a collaborative effort.
  • drakenot2 hours ago
    This is handy for maintaining a resume!

    I converted my resume to LaTeX with Claude Code recently. Being able to iterate on this code form of my document is so much nicer than fighting the formatting in Word/Google Docs.

    I dropped my .tex file into Prism and it's nice to be able to instantly render it.
  • plastic04120 hours ago
    The video shows a user asking Prism to find articles to cite and to put them in a bib file. But what's the point of citing papers that aren't referenced in the paper you're actually writing? Can you do that?

    Edit: You can add papers that are not cited to the bibliography. The video is about the bibliography, and I was thinking about cited works.
    • parsimo201020 hours ago
      A common approach to research is to do the literature review first and build up a library of citable material. Then, when writing your article, you summarize the relevant past research and put in appropriate citations.

      To clarify, there is a difference between a bibliography (a list of relevant works, not necessarily cited) and cited works (direct references in an article to relevant work). But most people start with a bibliography (the superset of relevant work) to make their citations.

      Most academics who have been doing research for a long time maintain an ongoing bibliography of work in their field. Some do it as a giant .bib file; some use software products like Zotero, Mendeley, etc. A few absolute psychos keep track of their bibliography in MS Word references (tbh, people in some fields do this because .docx is the accepted submission format for their journals, not because they are crazy).
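      The bibliography/citations distinction is also easy to audit mechanically. A sketch in Python (the file names are placeholders, and the regexes only cover plain \cite/\citet/\citep, so treat it as illustrative):

          import re
          from pathlib import Path

          tex = Path("paper.tex").read_text()  # hypothetical file names
          bib = Path("refs.bib").read_text()

          bib_keys = set(re.findall(r"@\w+\{([^,\s]+),", bib))
          cited = set()
          for group in re.findall(r"\\cite[tp]?\*?(?:\[[^\]]*\])*\{([^}]+)\}", tex):
              cited.update(key.strip() for key in group.split(","))

          print("in bibliography, never cited:", sorted(bib_keys - cited))
          print("cited, missing from bibliography:", sorted(cited - bib_keys))

      Most LaTeX toolchains already warn about the second set; the first set is the "in the bibliography but never cited" case discussed above.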
      • plastic04119 hours ago
        > a bibliography (a list of relevant works but not necessarily cited)

        Didn't know that there's a difference between a bibliography and cited works. Thank you.
      • suddenlybananas7 hours ago
        Yes but you should read your bibliography.
    • alphazard20 hours ago
      I once took a philosophy class where an essay assignment had a minimum citation count.

      Obviously ridiculous, since a philosophical argument should follow a chain of reasoning starting from stated axioms. Citing a paper to defend your position is just an appeal to authority (a fallacy that they teach you about in the same class).

      The citation requirement allowed the class to fulfill a curricular requirement that students needed to graduate, and therefore made the class more popular.
      • iterance12 hours ago
        In coursework, references are often a way of demonstrating the reading one did on a topic before committing to a course of argumentation. They also contextualize what exactly the student's thinking is in dialogue with, since general familiarity with a topic can't be assumed in introductory coursework. Citation minimums are usually imposed as a means of encouraging a student to read more about a topic before synthesizing their thoughts, and as a means of demonstrating that work to a professor. While there may have been administrative reasons for the citation minimum, the concept behind them is not unfounded, though they are probably not the most effective way of achieving that goal.

        While similar, the function is fundamentally different from citations appearing in research. However, even professionally, it is well beyond rare for a philosophical work, even for professional philosophers, to be written truly ex nihilo, as you seem to be suggesting. Citation is an essential component of research dialogue and cannot be elided.
      • bonsai_spool19 hours ago
        > Citing a paper to defend your position is just an appeal to authority

        Hmm, I guess I read this as a requirement to find enough supportive evidence to establish your argument as novel (or at least supported by 'established' logic).

        An appeal to authority explicitly has no reasoning associated with it; is your argument that one should be able to quote a blog as well as a journal article?
        • tyre12 hours ago
          It’s also a way of getting people to read things about the subject that they otherwise wouldn’t. I read a lot of philosophy because it was relevant to a paper I was writing, but wasn’t assigned to the entire class.
      • _bohm18 hours ago
        Huh? It's quite sensible to make reference to someone else's work when writing a philosophy paper, and there are many ways to do so that do not amount to an appeal to authority.
        • bogdan13 hours ago
          His point is that they asked for a *minimum* number of references, not references in general.
      • fxwin11 hours ago
        > Citing a paper to defend your position is just an appeal to authority (a fallacy that they teach you about in the same class).

        An appeal to authority is fallacious when the authority is unqualified for the subject at hand. Citing a paper from a philosopher to support a point isn't fallacious, but "<philosophical statement> because my biology professor said so" is.
  • danelski20 hours ago
    Many people here talk about Overleaf as if it were a 'dumb' editor without any of these capabilities. It has had them for some time via the Writefull integration (https://www.writefull.com/writefull-for-overleaf). Who wins will probably be decided by brand recognition, with Overleaf having a better starting position in this field but the money obviously being on OAI's side. With some of Writefull's features depending on ChatGPT's API, it's clear they are set to be priced out unless they do something smart.
  • PrismerAI4 hours ago
    Prismer-AI team here. We've actually been building an open-source stack for this since early 2025; we were fed up with the fragmented paper-to-code workflow too. If you're looking for an open-source alternative to Prism that's already modular and ready to fork, check us out: https://github.com/Prismer-AI/Prismer
  • tyteen4a035 hours ago
    If you're not a fan of OpenAI: I work at RSpace (https://github.com/rspace-os/rspace-web) and we're an open-source research data management system. While we're not as modern as Obsidian or NotebookLM (yet; I'm spearheading efforts to change that :)) we have been deployed at universities and institutions for years now.

    The solution is currently quite focused on life-science needs, but if you're curious, check us out!
  • DominikPeters1 day ago
    This seems like a very basic Overleaf alternative with few of its features, plus a shallow ChatGPT wrapper. It certainly can't compete with using VS Code or TeXstudio locally, collaborating through GitHub, and getting AI assistance from Claude Code or Codex.
    • qbit4221 hours ago
      Loads of researchers have only ever used LaTeX via Overleaf, and even more primarily edit LaTeX using Overleaf, for better or worse. It really simplifies collaborative editing, and the version history is good enough (not git-level, but most people weren't using full git functionality anyway). I just find that there are not that many features I need when paper-writing: the main bottlenecks are coming up with the content and collaborating, with Overleaf simplifying the latter. It also removes a class of bugs where different collaborators had slightly different TeX setups.

      I think I would only switch from Overleaf if I were writing a textbook or something similarly involved.
    • mturmon21 hours ago
      Getting close to the "why Dropbox when you can rsync" mistake (https://news.ycombinator.com/item?id=9224).

      @vicapow replied to keep the Dropbox parallel alive.
      • DominikPeters10 hours ago
        Yeah, I realized the parallel while I was writing my comment! I guess what I'm thinking is that a much better experience is available, and there is no in-principle reason why Overleaf and Prism have to be so much worse, especially in the age of vibe-coding. Prism feels like the result of two days of Claude Code, when they should have invested at least five days.
    • vicapow23 hours ago
      I could see it seeming that way, because the UI is quite minimalist, but the AI capabilities are very extensive, imo, if you really play with it.

      You're right that something like Cursor can work if you're familiar with all the requisite tooling (git, installing Cursor, installing LaTeX Workshop, knowing how it all fits together), but most researchers don't want to, and really shouldn't have to, figure out how to make that work for their specific workflows.
    • yfontana11 hours ago
      > Certainly can’t compete with using VS Code or TeXstudio locally, collaborating through GitHub, and getting AI assistance from Claude Code or Codex.

      I have a PhD in economics. Most researchers in that field have never even heard of any of those tools. Maybe LaTeX, but few actually use it. I was one of very few people in my department using Zotero to manage my bibliography; most did it manually.
    • jstummbillig23 hours ago
      Accessibility does matter
  • rockskon17 hours ago
    Naming their tool after the program where private companies run searches on behalf of, and hand the resulting customer data to, the NSA... was certainly a choice.
    • razster15 hours ago
      Sir, my tin hat is on.
  • beklein21 hours ago
    The Latent Space podcast just released a relevant episode today where they interviewed Kevin Weil and Victor Powell from, now, OpenAI, with some demos, background and context, and a Q&A. The YouTube link is here: https://www.youtube.com/watch?v=W2cBTVr8nxU
    • swyx20 hours ago
      oh, I was here to post it haha - thank you for doing that job for me so I'm not a total shill. I really enjoyed meeting them and was impressed by the sheer ambition of the AI for Science effort at OAI - in some sense I'm making a 10000x smaller-scale bet than OAI on AI for Science "taking off" this year with the upcoming dedicated Latent Space Science pod.

      generally I think there's a lot of fertile ground for smart generalist engineers to make a ton of progress here this year + it will probably be extremely financially + personally rewarding, so I broadly want to create a dedicated pod to highlight opportunities available for people who don't traditionally think of themselves as "in science" to cross over into "AI for hard STEM", because it turns out that 1) they need you, 2) you can fill in what you don't know, 3) it will be impactful/challenging/rewarding, and 4) we've exhausted common knowledge frontiers and benchmarks anyway, so the only* people left working on civilization-impacting/change-history-forever hard problems are basically at this frontier.

      *conscious exaggeration, sorry
      • beklein11 hours ago
        Wasn't aware you're so active on HN; sorry for stealing your karma.

        Love the idea of a dedicated series/pod where normal people take on hard problems by leveraging the emergent capabilities of frontier AI systems.

        Anyway, thanks for the pod!
    • vicapow21 hours ago
      Hope you like it :D I'm here if you have questions, too.
  • jumploops23 hours ago
    I’ve been “testing” LLM willingness to explore novel ideas/hypotheses on a few random topics[0].

    The earlier LLMs were interesting in that their sycophantic nature eagerly agreed, often lacking criticality.

    After the reduction in said sycophancy, I’ve found that certain LLMs are much more unwilling (especially the reasoning models) to move past the “known” science[1].

    I’m curious to see how/if we can strike the right balance with an LLM focused on scientific exploration.

    [0] Sediment lubrication due to organic material in specific subduction zones, a potential algorithmic basis for colony collapse disorder, the potential to evolve anthropomorphic kiwis, etc.

    [1] Caveat: it’s very easy for me to tell when an LLM is “off the rails” on a topic I know a lot about; much less so, and much more dangerous, for these “tests” where I’m certainly no expert.
  • anon125311 hours ago
    Slightly off-topic but related: currently I&#x27;m in a research environment (biomedicine) where a lot of AI is used. Sometimes well, often poorly. So as an exercise I drafted some rules and commitments about AI and research (&quot;Research After AI: Principles for Accelerated Exploration&quot; [1]), I took the Agile manifesto as a starting point. Anyways, this might be interesting as a perspective on the problem space as I see it.<p>[1] <a href="https:&#x2F;&#x2F;gist.github.com&#x2F;joelkuiper&#x2F;d52cc0e5ff06d12c85e492e4295ca890" rel="nofollow">https:&#x2F;&#x2F;gist.github.com&#x2F;joelkuiper&#x2F;d52cc0e5ff06d12c85e492e42...</a>
  • maest19 hours ago
Buried halfway through the article.<p>&gt; Prism is a free workspace for scientific writing and collaboration
  • falcor8421 hours ago
    It seems clear to me that this is about OpenAI getting telemetry and other training data with the intent of having their AI do scientific work independently down the line, and I&#x27;m very ambivalent about it.
    • Ronsenshi17 hours ago
Just more coal for the hype train - AI companies can&#x27;t afford a news cycle without something AI. Stock prices must grow!
  • sva_22 hours ago
    &gt; In 2025, AI changed software development forever. In 2026, we expect a comparable shift in science,<p>I can&#x27;t wait
  • smuenkel2 hours ago
    That click towards accepting the bibliography without checking it is absolutely mindboggling.
  • jeffybefffy51923 hours ago
I postulate 90% of the reason OpenAI now has &quot;variants&quot; for different use cases is just to capture training data...
    • cauliflower271818 hours ago
      ChatGPT lets you refuse to allow your content to be used for training (under Preferences -&gt; Data controls), but Prism does not.
  • mfld8 hours ago
    I&#x27;d like to hypothesize a little bit about the strategy of OpenAI. Obviously, it is nice for academic users that there is a new option for collaborative LaTeX editing plus LLM integration for free. At the same time, I don&#x27;t think there is much added revenue expected here, for example, from Pro features or additional LLM usage plans. My theory is that the value lies in the training data received from highly skilled academics in the form of accepted and declined suggestions.
    • sn0wr8ven7 hours ago
It is nice for academics, but I would ask why? These aren&#x27;t tasks you can&#x27;t do yourself. Yes, it&#x27;s all in one place, but it&#x27;s not like doing the exact same thing previously was ridiculous to set up.<p>A comparison that comes to mind is the n8n workflow-type product they put out before. N8n takes setup. Proofreading, asking for more relevant papers, converting pictures to LaTeX code, etc. doesn&#x27;t take any setup. People do this with or without this tool almost identically.
    • hdivider6 hours ago
      Even that would be quite niche for OpenAI. They raised far too much capital, and now have to deliver on AGI, fast. Or an ultra-high-growth segment, which has not materialized.<p>The reason? I can give you the full source for Sam Altman:<p>while(alive) { RaiseCapital() }<p>That is the full extent of Altman. :)
  • bonsai_spool19 hours ago
The example proposed in &quot;and speeding up experimental iteration in molecular biology&quot; has been done since at least the mid-2000s.<p>It&#x27;s concerning that this wasn&#x27;t identified, and it augurs poorly for their search capabilities.
  • vitalnodo23 hours ago
    With a tool like this, you could imagine an end-to-end service for restoring and modernizing old scientific books and papers: digitization, cleanup, LaTeX reformatting, collaborative or volunteer-driven workflows, OCR (like Mathpix), and side-by-side comparison with the original. That would be useful.
    • vessenes23 hours ago
      Don’t forget replication!
      • olivia-banks23 hours ago
I&#x27;m curious how you think AI would aid in this.
        • vessenes22 hours ago
Tao’s doing a lot of related work in mathematics, so I can say that, first of all, literature search is a clearly valuable function frontier models offer.<p>Past that, a frontier LLM can do a lot of critiquing, a good amount of experiment design, a check on statistical significance&#x2F;power claims, and kibitzing on methodology; it can likely suggest experiments to verify or disprove claims. These all seem like pretty useful functions to provide to a group of scientists.
        • noitpmeder22 hours ago
          Replicate this &lt;slop&gt;<p>Ok! Here&#x27;s &lt;more slop&gt;
          • olivia-banks22 hours ago
            I don&#x27;t think you understand what replication means in this context.
            • NateEag18 hours ago
              I think they do, and you missed some biting, insightful commentary on using LLMs for scientific research.
  • markbao23 hours ago
Not an academic, but I used LaTeX for years and it doesn’t feel like what the future of publishing should use. It’s finicky and takes so much markup to do simple things. A lab manager once told me about a study finding that people who used MS Word to typeset were more productive, and I can see that…
    • crazygringo22 hours ago
      100% completely agreed. It&#x27;s not the future, it&#x27;s the past.<p>Typst feels more like the future: <a href="https:&#x2F;&#x2F;typst.app&#x2F;" rel="nofollow">https:&#x2F;&#x2F;typst.app&#x2F;</a><p>The problem is that so many journals require certain LaTeX templates so Typst often isn&#x27;t an option at all. It&#x27;s about network effects, and journals don&#x27;t want to change their entire toolchain.
      • lmc8 hours ago
        I&#x27;ve had some good initial results in going from typst to .tex with Claude (Opus 4.5) for an IEEE journal paper - idiomatic use of templates etc.
    • maxkfranz22 hours ago
LaTeX is good for equations. And LaTeX tools produce very nice PDFs, but I wouldn&#x27;t want to write in LaTeX generally either.<p>The main feature that&#x27;s important is collaborative editing (like online Word or Google Docs). The second would be a good reference manager.
    • probably_wrong18 hours ago
Academic here. Working on MS Word after years of using LaTeX is... hard. With LaTeX I can be reassured that the formatting will be 95% fine and the remaining 5% will come down to taste (&quot;why doesn&#x27;t this Figure show on this page?&quot;), while in Word I&#x27;m constantly fighting the layout - delete one line? Your entire paragraph is now bold. Changed the font of the entire text? No, that one paragraph ignores you. Want to delete that line after that one Table? F you, you&#x27;re not. There&#x27;s a reason why this video joke [1] got 14M views.<p>And then I need an extra tool for dealing with bibliography, change history is unpredictable (and, IMO, vastly inferior to version control), and everything gets even worse if I open said Word file in LibreOffice.<p>LaTeX&#x27;s syntax may be hard, but Word actively fights me during writing.<p>[1] Moving a photo in Microsoft Word - <a href="https:&#x2F;&#x2F;www.instagram.com&#x2F;jessandquinn&#x2F;reel&#x2F;DIMkKkqODS5&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.instagram.com&#x2F;jessandquinn&#x2F;reel&#x2F;DIMkKkqODS5&#x2F;</a>
    • auxym23 hours ago
Agreed. TeX&#x2F;LaTeX is very old tech. Error recovery and messages are very bad. Developing new macros in TeX is about as fun as you&#x27;d expect developing in a 70s-era language to be (i.e. probably similar to COBOL and old Fortran).<p>I haven&#x27;t tried it yet, but Typst seems like a promising replacement: <a href="https:&#x2F;&#x2F;typst.app&#x2F;" rel="nofollow">https:&#x2F;&#x2F;typst.app&#x2F;</a>
    • hatmatrix18 hours ago
      That study must have compared beginners in LaTeX and MS Word. There is a learning curve, but LaTeX will often save more time in the end.<p>It is an old language though. LaTeX is the macro system on top of TeX, but now you can write markdown or org-mode (or orgdown) and generate LaTeX -&gt; PDF via pandoc&#x2F;org-mode. Maybe this is the level of abstraction we should be targeting. Though currently, you still need to drop into LaTeX for very specific fine-tuning.
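For example, a single pandoc invocation along these lines converts markdown with citations to PDF through LaTeX; the file names here are hypothetical, and the exact flags depend on your pandoc version:

  pandoc paper.md -o paper.pdf --pdf-engine=lualatex --citeproc --bibliography=refs.bib

You then only drop into raw LaTeX when the defaults need fine-tuning.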
  • unicodeveloper5 hours ago
Not too bad an acquisition though. Scientists need more tech tools, just like everyone else, to accelerate their work. The faster scientists are, the more discoveries &amp; world-class solutions to problems we can have.<p>Maybe OpenAI should acquire Valyu too. They let you do deep research on academic papers.
  • sbszllr23 hours ago
The quality and usefulness of it aside, the primary question is: are they still collecting chats for training data? If so, it limits how comfortable people would be, and sometimes even whether they are permitted, to work on their yet-to-be-public research using this tool.
    • einpoklum11 hours ago
They don&#x27;t call it PRISM for nothing, my friend...<p>They collect chat records for any number of uses, not the least of which being NSA surveillance and analysis - highly likely given what we know from the Snowden leaks.
  • random_duck1 hour ago
So you built Overleaf with bloat?
  • reassess_blind22 hours ago
    Do you think they used an em-dash in the opening sentence because they’re trying to normalise the AI’s writing style, or…
    • torginus21 hours ago
      I haven&#x27;t used MS Word in quite a while, but I distinctly remember it changed minus signs to em dashes.
    • jedberg19 hours ago
      &gt; because they’re trying to normalise the AI’s writing style,<p>AIs use em dashes because competent writers have been using em dashes for a long time. I really hate the fact that we assume em dash == AI written. I&#x27;ve had to stop using em dashes because of it.
      • noname12016 hours ago
        Likewise, I’m now reluctant to use any em dashes these days because unenlightened people immediately assume that it’s AI. I used em dashes way before AI decided these were cool
    • flumpcakes21 hours ago
LaTeX made writing em dashes very easy. To the point that I would use them all the time in my academic writing. It&#x27;s a shame that perfectly good typography is now a sign of slop&#x2F;fraud.
    • reed123422 hours ago
      Probably used their product to write it
    • exyi21 hours ago
... or they taught GPT to use em-dashes, because of their love for em-dashes :)
  • r_thambapillai2 hours ago
Didn&#x27;t OpenAI just say they needed a code red to be relentlessly focused on making ChatGPT market-leading again? Why are they launching new products? Is the code red over? Is the Gemini threat considered done?
  • WolfOliver1 day ago
Check out MonsterWriter if you are concerned about this recent acquisition.<p>It also offers LaTeX workspaces.<p>See the video: <a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=feWZByHoViw" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=feWZByHoViw</a>
  • MattDaEskimo23 hours ago
What&#x27;s the goal here?<p>There was an idea of OpenAI charging commission or royalties on new discoveries.<p>What kind of researcher wants to potentially lose out, or get caught up in legal issues, because of a free ChatGPT wrapper? Or am I missing something?
    • engineer_2222 hours ago
&gt; Prism is free to use, and anyone with a ChatGPT account can start writing immediately.<p>Maybe it&#x27;s cynical, but how does the old saying go? If the service is free, <i>you</i> are the product.<p>Perhaps the goal is to hoover up research before it goes public. Then they use it for training data. With enough training data they&#x27;ll be able to rapidly identify breakthroughs and use that to pick stocks or send their agents to wrap up the IP or something.
  • uwehn19 hours ago
If you&#x27;re looking for something like this for Typst: any VS Code fork with AI (Cursor, Antigravity, etc.) plus the tinymist extension (<a href="https:&#x2F;&#x2F;github.com&#x2F;Myriad-Dreamin&#x2F;tinymist" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;Myriad-Dreamin&#x2F;tinymist</a>) is pretty nice. Since it&#x27;s local, it won&#x27;t have the collaboration&#x2F;sharing parts built in, but that can be solved in the usual ways too.
  • epolanski20 hours ago
Not gonna lie, I cringed when it asked to insert citations.<p>Like, what&#x27;s the point?<p>You cite stuff because you literally talk about it in the paper. The expectation is that you read it and that it has influenced your work.<p>As someone who&#x27;s been a researcher in the past, with 3 papers published in high-impact journals (in chemistry), I&#x27;m beyond appalled.<p>Let me explain how scientific publishing works to people out of the loop:<p>1. Science is an insanely huge domain. As soon as you drift into <i>any</i> topic, the number of reviewers with the capability to understand what you&#x27;re talking about drops quickly to near zero. Want to speak about properties of helicoidal peptides in the context of electricity transmission? Small club. Want to talk about some advanced math involving Fourier transforms in the context of ML? Bigger, but still a small club. And when I say small, I mean a dozen people on the planet, likely fewer, with the expertise to properly judge. It doesn&#x27;t matter what the topic is; at the elite level required to really understand what&#x27;s going on and catch errors or BS, these are very small clubs.<p>2. The people in those small clubs are already stretched thin. Virtually all of them run labs, so they are already bogged down following their own research, fundraising, and coping with teaching duties (which they generally despise; most good scientists are barely more than mediocre professors, and they already have huge backlogs).<p>3. With AI this is a disaster. If having to review slop for the BS internal tool at your software job was already bad, imagine having to review slop in highly technical scientific papers.<p>4. The good? Because these clubs are relatively small, people pushing slop will quickly find their academic opportunities even more limited. So the incentives for proper work are hopefully there. But if Asian researchers (yes, no offense) were already spamming half the world&#x27;s papers with cheated slop (non-reproducible experiments) in a desperate bid to publish before, I can&#x27;t imagine now.
    • SoKamil20 hours ago
It’s as if not only the technology is to blame, but also the culture and incentives of the modern world.<p>The urge to cheat in order to get a job, a promotion, approval. The urge to do stuff you are not even interested in, just to look good on a resume. And to some extent I feel sorry for these people. At the end of the day you have to pay your bills.
      • epolanski19 hours ago
This isn&#x27;t about paying your bills, but about having a chance of becoming a full-time researcher or professor in academia, which is obviously the ideal career path for someone interested in science.<p>All those people can go work for private companies, but few as scientists rather than technicians or QAs.
    • bonsai_spool19 hours ago
&gt; But if Asian researchers (yes, no offense) were already spamming half the world&#x27;s papers with cheated slop (non-reproducible experiments) in a desperate bid to publish before, I can&#x27;t imagine now.<p>Hmm, I follow the argument, but it&#x27;s inconsistent with your assertion that there is going to be incentive for &#x27;proper work&#x27; over time. Anecdotally, I think the median quality of papers from middle- and top-tier Chinese universities is improving (your comment about &#x27;Asian researchers&#x27; ignores that Japan, South Korea, and Taiwan have established research programs, at least in biology).
      • epolanski9 hours ago
Japan is notoriously an exception in the region.<p>South Korea and China produce huge amounts of non-reproducible experiments.
  • AuthAuth23 hours ago
This does way less than I&#x27;d expect. Converting images to TikZ is nice, but some of the other applications demonstrated were horrible. There is no way anyone should be using AI to cite.
  • pwdisswordfishy22 hours ago
    Oh, like that mass surveillance program!
  • addedlovely1 hour ago
Ahhhh. It happily re-wrote the example paper to be from Google AI and added references that supported that falsehood.<p>Slop science papers are just what the world needs.
  • butlike4 hours ago
&gt; Prism is free to use, and anyone with a ChatGPT account can start writing immediately.<p>Great, so now I&#x27;ll have to sift through a bunch of legitimate-looking (though not necessarily legitimate) non-peer-reviewed whitepapers, where if I forget to check the peer-review status even once I risk wasting a large amount of time reading gobbledygook. Thanks, OpenAI?
    • azan_4 hours ago
      Don&#x27;t worry - most of the peer reviewed stuff is also bad.
  • arnejenssen10 hours ago
This assumes that the article, the artifact, is what is most valuable. But often it is the process of writing the article that has the most value. Prism can be a nice tool for increasing output. But the second-order consequence could be that the skill of deep thinking and writing will atrophy.<p>&quot;There is no value added without sweating&quot;
    • lionkor7 hours ago
      Work is value and produces sweat, and OpenAI sells just the sweat.
  • radioactivist22 hours ago
    Is anyone else having trouble using even some of the basic features? For example, I can open a comment, but it doesn&#x27;t seem like there is any way to close them (I try clicking the checkmark and nothing happens). You also can&#x27;t seem to edit the comments once typed.
    • lxe22 hours ago
      Thanks for surfacing this. If you click to &quot;tools&quot; button to the left of &quot;compile&quot;, you&#x27;ll see a list of comments, and you can resolve them from there. We&#x27;ll keep improving and fixing things that might be rough around the edges.<p>EDIT: Fixed :)
  • melagonster17 hours ago
Prism was already a well-known piece of software before OpenAI used this name: <a href="https:&#x2F;&#x2F;www.graphpad.com&#x2F;features" rel="nofollow">https:&#x2F;&#x2F;www.graphpad.com&#x2F;features</a>
  • estebarb11 hours ago
    I&#x27;m really surprised OpenAI went with LaTeX. ChatGPT still has issues maintaining LaTeX syntax. It still happily switches to markdown notation for quotes or emph. Gemini has a similar problem as well. I guess that there aren&#x27;t enough good LaTeX documents in the training set.
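To illustrate the drift, a minimal sketch contrasting the standard LaTeX forms with the markdown habits the models fall back to:

  \emph{emphasis}    % not *emphasis*
  \textbf{bold}      % not **bold**
  ``quoted text''    % not "quoted text"

A model that silently swaps the left-hand forms for the markdown ones produces source that compiles to the wrong output, or not at all.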
  • tzahifadida5 hours ago
Since it offers collaboration for free, it can take a bite out of Overleaf&#x27;s market.
  • slashdave1 hour ago
Not a PR person myself, but why use a parody topic as the example paper? Couldn&#x27;t someone have invented something realistic to show? Or, heck, just get permission to show a real paper?<p>The example just reinforces the whole concept of LLM slop overwhelming preprint archives. I found it off-putting.
  • flockonus21 hours ago
Curious in terms of trademark: could it infringe on Vercel&#x27;s Prisma (a very popular ORM&#x2F;framework in Node.js)?<p>EDIT: as corrected by a comment below, Prisma is not Vercel&#x27;s, but ©2026 Prisma Data, Inc. -- the curiosity still persists(?)
    • mkl10 hours ago
      I think it may be a generic word that&#x27;s hard to trademark or something, as the existing scientific analysis software called Prism (<a href="https:&#x2F;&#x2F;www.graphpad.com&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.graphpad.com&#x2F;</a>) doesn&#x27;t seem to be trademarked; the Trademarks link at the bottom goes to this list, which doesn&#x27;t include Prism: <a href="https:&#x2F;&#x2F;www.dotmatics.com&#x2F;trademarks" rel="nofollow">https:&#x2F;&#x2F;www.dotmatics.com&#x2F;trademarks</a>
    • bitpush21 hours ago
      <a href="https:&#x2F;&#x2F;github.com&#x2F;prisma&#x2F;prisma" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;prisma&#x2F;prisma</a> is its own thing, yeah? not affiliated with Vercel AFAICT.
    • wetpaws21 hours ago
      [dead]
  • pmbanugo6 hours ago
I don&#x27;t see anything fancy here that Google doesn&#x27;t already do with their Gemini products, often even better.
  • ozgung10 hours ago
I don’t see anything regarding privacy of your data. Did I miss it, or do they just use your unpublished research and your prompts, as a real human researcher, to train their own AI researchers?
  • jf___12 hours ago
    &lt;typst&gt;and just when i thought i was out they pull me back in&lt;&#x2F;typst&gt;
  • ILoveHorses12 hours ago
    So, basically SciGen[<a href="https:&#x2F;&#x2F;davidpomerenke.github.io&#x2F;scigen.js&#x2F;" rel="nofollow">https:&#x2F;&#x2F;davidpomerenke.github.io&#x2F;scigen.js&#x2F;</a>] but burning through more GPUs?
  • nxobject20 hours ago
What they mean by &quot;academic&quot; is fairly limited here, if LaTeX is the main writing platform. What are their plans for expanding past that and working with, say, Jane Biomedical Researcher at a GSuite or Microsoft org, who has to use Word&#x2F;Docs and a redlining-based collaboration workflow? I can certainly see why they&#x27;re making it free at this point.<p>FWIW, Google Scholar has a fairly compelling natural-language search tool, too.
  • jonas_kgomo17 hours ago
I actually found it quite Robin Hood of OpenAI to acquihire them. Basically, this startup was my favourite thing for the past few months, but they were experiencing server overload and other reliability issues; I think OpenAI taking them under their wing is a good&#x2F;neutral storyline. I think it&#x27;s a net good for science, given the OpenAI toolchain.
  • Myrmornis14 hours ago
Away from applied math&#x2F;stats, physics, etc., not that many scientists use LaTeX. I&#x27;m not saying it&#x27;s not useful, just that I don&#x27;t think many scientists will feel like a product that&#x27;s LaTeX-based is intended for them.
    • plutomeetsyou14 hours ago
      Economists definitely use LaTeX, but as a field, it&#x27;s at the intersection of applied math and social sciences so your point stands. I also know some Data Scientists in the industry who do.
  • homerowilson17 hours ago
Adding<p>% !TEX program = lualatex<p>to the top of your document lets you switch the LaTeX engine. This is required for recent accessibility-standards compliance (support for tagging and \DocumentMetadata). Compilation takes a bit longer, but it works fine, unlike with Overleaf, where using the lualatex engine does not work in the free version.
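For reference, a minimal preamble sketch; the key names follow the LaTeX tagged-PDF project and may vary with your TeX Live release, so treat them as assumptions to check against the latex-lab documentation:

  % !TEX program = lualatex
  \DocumentMetadata{
    lang        = en,
    pdfversion  = 2.0,
    pdfstandard = ua-2,   % target the PDF UA-2 accessibility standard
    testphase   = latest  % opt in to the current tagging test phase
  }
  \documentclass{article}
  \begin{document}
  Tagged, accessible PDF output.
  \end{document}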
    • gerdesj17 hours ago
      How on earth is that pronounced?
      • mkl10 hours ago
TeX is pronounced Teck, or with a sound like in Bach or loch. Derivatives like LaTeX and LuaLaTeX are similar.
      • gverrilla16 hours ago
        How on lua is that pronounced?
  • khalic23 hours ago
    All your papers are belong to us
    • vicapow23 hours ago
      Users have full control over whether their data is used to help improve our models
      • chairhairair20 hours ago
        Never trust Sam Altman.<p>Even if yall don’t train off it he’ll find some other way.<p>“In one example, [Friar] pointed to drug discovery: if a pharma partner used OpenAI technology to help develop a breakthrough medicine, [OpenAI] could take a licensed portion of the drug&#x27;s sales”<p><a href="https:&#x2F;&#x2F;www.businessinsider.com&#x2F;openai-cfo-sarah-friar-future-revenue-sources-2026-1" rel="nofollow">https:&#x2F;&#x2F;www.businessinsider.com&#x2F;openai-cfo-sarah-friar-futur...</a>
      • danelski20 hours ago
        Only the defaults matter.
  • CobrastanJorji20 hours ago
    &quot;Hey, you know how everybody&#x27;s complaining about AI making up totally fake science shit? Like, fake citations, garbage content, fake numbers, etc?&quot;<p>&quot;Sure, yes, it comes up all the time in circles that talk about AI all the time, and those are the only circles worth joining.&quot;<p>&quot;Well, what if we made a product entirely focused on having AI generate papers? Like, every step of the paper writing, we give the AI lots of chances to do stuff. Drafting, revisions, preparing to publish, all of it.&quot;<p>&quot;I dunno, does anybody want that?&quot;<p>&quot;Who cares, we&#x27;re fucked in about two years if we don&#x27;t figure out a way to beat the competitors. They have actual profits, they can ride out AI as long as they want.&quot;<p>&quot;Yeah, I guess you&#x27;re right, let&#x27;s do your scientific paper generation thing.&quot;
  • bariswheel18 hours ago
I used Overleaf during grad school and it was easy enough; I&#x27;m interested to see what more value this will bring. Sometimes making fewer decisions is the better route, e.g. vi vs MS Word, but I won&#x27;t say too much without having tried it yet.
  • ggm19 hours ago
    A competition for the longest sequence of \relax in a document ensues. If enough people do this, the AI will acquire merit and seek to &quot;win&quot; ...
  • flumpcakes21 hours ago
This is terrible for science.<p>I&#x27;m sorry, but publishing is hard, and it should be hard. There is a work function that requires effort to write a paper. We&#x27;ve been dealing with low-quality mass-produced papers from certain regions of the planet for decades (which, it appears, are now producing decent papers too).<p>All this AI tooling will do is lower the effort to the point that completely automated nonsense will now flood in, and it will need to be read and filtered by humans. This is already challenging.<p>Looking elsewhere in society, AI tools are already being used to produce scams and phishing attacks more effective than ever before.<p>Whole new arenas of abuse are now rife, with the cost of producing fake pornography of real people (what should be considered a sexual abuse crime) at mere cents.<p>We live in a little microcosm where we can see the benefits of AI because tech jobs are mostly about automation and making the impossible (or expensive) possible (or cheap).<p>I wish more people would talk about the societal issues AI is introducing. My worthless opinion is that Prism is not a good thing.
    • jimmar21 hours ago
I&#x27;ve wasted hours of my life trying to get LaTeX to format my journal articles to different journals&#x27; specifications. That&#x27;s tedious typesetting that wastes my time. I&#x27;m all for AI tools that help me produce my thoughts with as little friction as possible.<p>I&#x27;m not in favor of letting AI do my thinking for me. Time will tell where Prism sits.
      • flumpcakes21 hours ago
This Prism video was not just typesetting. If OpenAI released tools that just helped you typeset or create diagrams from written text, that would be fine. But it&#x27;s not; it&#x27;s writing papers for you. Scientists&#x2F;publishers really do not need the onslaught of slop this will create. How can we even trust qualifications in the post-AI world, where cheating is rampant at universities?
        • f2fff15 hours ago
          Nah this is necessary.<p>Lessons are learned the hard way. I invite the slop - the more the merrier. It will lead to a reduction in internet activity as people puke from the slop. And then we chart our way back to the right path.<p>It is what it is. Humans.
    • PlatoIsADisease21 hours ago
      I just want replication in science. I don&#x27;t care at all how difficult it is to write the paper. Heck, if we could spend more effort on data collection and less on communication, that sounds like a win.<p>Look at how much BS flooded psychology but had pretty ideas about p values and proper use of affect vs effect. None of that mattered.
  • unixzii13 hours ago
    It may be useful, but it also encourages people to stop writing their own papers.
    • mves12 hours ago
As they demo in the video, it even encourages people to skip doing the research (which includes <i>reading</i> both relevant AND not-so-relevant papers in order to explore!). Instead, prompt &quot;cite some relevant papers, please&quot;, and done. Hours of actual reading, thinking, and exploration reduced to a minimum.<p>A couple of generations of students later, and these will be rare skills: information finding, actual thinking, and conveying complex information in writing.
  • zmmmmm18 hours ago
They compare it to software development, but there is a crucial difference from software development: by and large, software is an order of magnitude easier to verify than it is to create. By comparison, a vibe-generated manuscript will be MUCH more work to verify than a piece of software with equivalent complexity. On top of that, review of academic literature is largely outsourced to the academic community for free. There is no model to support it that scales to an increased volume of output.<p>I would not like to be a publisher right now, facing the onslaught of thousands and thousands of slop-generated articles and trying to find reviewers for them all.
  • Onavo21 hours ago
    It would be interesting to see how they would compete with the incumbents like<p><a href="https:&#x2F;&#x2F;Elicit.com" rel="nofollow">https:&#x2F;&#x2F;Elicit.com</a><p><a href="https:&#x2F;&#x2F;Consensus.app" rel="nofollow">https:&#x2F;&#x2F;Consensus.app</a><p><a href="https:&#x2F;&#x2F;Scite.ai" rel="nofollow">https:&#x2F;&#x2F;Scite.ai</a><p><a href="https:&#x2F;&#x2F;Scispace.com" rel="nofollow">https:&#x2F;&#x2F;Scispace.com</a><p><a href="https:&#x2F;&#x2F;Scienceos.ai" rel="nofollow">https:&#x2F;&#x2F;Scienceos.ai</a><p><a href="https:&#x2F;&#x2F;Undermind.ai">https:&#x2F;&#x2F;Undermind.ai</a><p>Lots of players in this space.
  • dash29 hours ago
    “LaTeX-native“<p>Oh NO. We will be stuck in LaTeX hell forever.
  • asadm21 hours ago
Disappointing, actually. What I actually need is a research &quot;management&quot; tool that lets me put in relevant citations but also goes through the ENTIRE arXiv or Google Scholar and connects ideas, or finds novel ideas in random fields that somehow relate to what I am trying to solve.
  • Min0taurr3 hours ago
Dog turd; it will be used to mine research data and train some sort of research AI model. Do not trust it. I would much rather support Overleaf, which is made by academics for academics, than some vibe-coded alternative with deep data mining. No wonder we have so much slop in research at the moment.
  • noahbp21 hours ago
    They seem to have copied Cursor in hijacking ⌘Y shortcut for &quot;Yes&quot; instead of Undo.
    • drusepth18 hours ago
      In what applications is ⌘Y Undo and not ⌘Z? Is ⌘Y just a redundant alternative?
      • zerocrates12 hours ago
        Ctrl-Y is typically Redo, not Undo. Maybe that&#x27;s what they meant.<p>Apparently on Macs it&#x27;s usually Command-Shift-Z?
  • legitster23 hours ago
    It&#x27;s interesting how quickly the quest for the &quot;Everything AI&quot; has shifted. It&#x27;s much more efficient to build use-case specific LLMs that can solve a limited set of problems much more deeply than one that tries to do everything well.<p>I&#x27;ve noticed this already with Claude. Claude is so good at code and technical questions... but frankly it&#x27;s unimpressive at nearly anything else I have asked it to do. Anthropic would probably be better off putting all of their eggs in that one basket that they are good at.<p>All the more reason that the quest for AGI is a pipe dream. The future is going to be very divergent AI&#x2F;LLM applications - each marketed and developed around a specific target audience, and priced respectively according to value.
    • Otterly998 hours ago
I completely agree.<p>In my lab, we have been struggling with automated image segmentation for years. 3 years ago, I started learning ML; the task is pretty standard, so there are a lot of solutions.<p>In 3 months, I managed to get a working solution, which only took a lot of sweat annotating images first.<p>I think this is where tools like OpenCode really shine, because they unlock the potential for any user to generate a solution to their specific problem.
    • falcor8421 hours ago
I don&#x27;t get this argument. Our nervous system is also heterogeneous; why wouldn&#x27;t AGI be based on an &quot;executive functions&quot; AI that manages per-function AIs?
  • ai_critic23 hours ago
    Anybody else notice that half the video was just finding papers to decorate the bibliography with? Not like &quot;find me more papers I should read and consider&quot;, but &quot;find papers that are relevant that I should cite--okay, just add those&quot;.<p>This is all pageantry.
    • sfink22 hours ago
      Yes. That part of the video was straight-up &quot;here&#x27;s how to automate academic fraud&quot;. Those papers could just as easily negate one of your assumptions. What even <i>is</i> research if it&#x27;s not <i>using</i> cited works?<p>&quot;I know nothing but had an idea and did some work. I have no clue whether this question has been explored or settled one way or another. But here&#x27;s my new paper claiming to be an incremental improvement on... whatever the previous state of understanding was. I wouldn&#x27;t know, I haven&#x27;t read up on it yet. Too many papers to write.&quot;
    • renyicircle22 hours ago
      It&#x27;s as if it&#x27;s marketed to the students who have been using ChatGPT for the last few years to pass courses and now need to throw together a bachelor&#x27;s thesis. Bibliography and proper citation requirements are a pain.
      • pfisherman22 hours ago
        That is such a bummer. At the time, it was annoying and I groused and grumbled about it; but in hindsight my reviewers pointed me toward some good articles, and I am better for having read them.
      • olivia-banks22 hours ago
I agree with this. This problem is only going to get worse once these people enter academia and face the need to publish.
    • olivia-banks23 hours ago
I&#x27;ve noticed this pattern, and it really drives me nuts. You should really be doing a comprehensive literature review before starting <i>any</i> sort of review or research paper.<p>We removed the authorship of a former co-author on a paper I&#x27;m on because his workflow was essentially this--with AI-generated text--and a not-insignificant amount of straight-up plagiarism.
      • NewsaHackO22 hours ago
There is definitely a difference between how senior researchers and students go about making publications. Students get told basically what topic they should write a paper on or prepare data for, so they work backwards: they try to write the paper (possibly researching some information to write it), then add references because they know they have to. For actual researchers, it would be a complete waste of time&#x2F;funding to start a project on a question that has already been answered (and something the grant reviewers are going to know has already been explored), so in order not to waste their own time, they have to do what you said and actually conduct a comprehensive literature review before even starting the work.
    • black_puppydog22 hours ago
Plus, this practice (just inserting AI-proposed citations&#x2F;sources) is what has recently been behind some very embarrassing &quot;editing&quot; mistakes, notably in reports from public institutions. Now OpenAI lets us do pageantry even faster! &lt;3
    • verdverm22 hours ago
      It&#x27;s all performance over practice at this point. Look to the current US administration as the barometer by which many are measuring their public perceptions
    • adverbly22 hours ago
      I chuckled at that part too!<p>Didn&#x27;t even open a single one of the papers to look at them! Just said that one is not relevant without even opening it.
    • maxkfranz21 hours ago
      A more apt example would have been to show finding a particular paper you want to cite, but you don’t want to be bothered searching your reference manager or Google Scholar.<p>E.g. “cite that paper from John Doe on lorem ipsum, but make sure it’s the 2022 update article that I cited in one of my other recent articles, not the original article”
    • teaearlgraycold22 hours ago
The hand-drawn diagram to LaTeX is a little embarrassing. If you load up Prism and create your first blank project you can see the image. It looks like it&#x27;s actually a LaTeX rendering of a diagram rendered with a hand-drawn style and then overlaid on a very clean image of a napkin. So you&#x27;ve proven that you can go from a rasterized LaTeX diagram back to equivalent LaTeX code. Interesting, but it probably will not hold up when it meets real-world use cases.
    • thesuitonym22 hours ago
You may notice that this is the way writing papers works in undergraduate courses. It&#x27;s just another in a long line of examples of MBA tech bros gleaning an extremely surface-level understanding of a topic, then deciding they&#x27;re experts.
  • chaosprint20 hours ago
    As a researcher who has to use LaTeX, I used to use Overleaf, but lately I&#x27;ve been configuring it locally in VS Code. The configuration process on Mac is very simple. Considering there are so many free LLMs available now, I still won&#x27;t subscribe to ChatGPT.
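For anyone wanting to reproduce that kind of local setup, a minimal sketch, assuming TeX Live and latexmk are installed (the LaTeX Workshop extension drives latexmk under the hood by default; main.tex is a hypothetical file name):

  latexmk -lualatex -pvc main.tex

The -pvc flag watches the source and recompiles on every save, and any chat model can be pointed at the .tex file directly, so no editor-tied subscription is needed.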
  • andrepd21 hours ago
&quot;ChatGPT writes scientific papers&quot; is somehow being advertised as a good thing. What is there even left to say?
  • delduca20 hours ago
Five seconds into reading and I had spotted that it was written by AI.
    • drusepth18 hours ago
We human writers love em dashes too ;)
  • hit8run22 hours ago
    They are really desperate now, right?
  • 0dayman23 hours ago
In the end we&#x27;re going to end up with papers written by AI, proofread by AI... summarized for readers by AI. I think this is just for them to remain relevant and be seen as still pushing something out.
    • falcor8421 hours ago
      You&#x27;re assuming a world where humans are still needed to read the papers. I&#x27;m more worried about a future world where AIs do all of the work of progressing science and humans just become bystanders.
      • drusepth18 hours ago
        Why are you worried about that world? Is it because you expect science to progress too fast, or too slow?
        • falcor8417 hours ago
          Too fast. It&#x27;s already coding too fast for us to follow, and from what I hear, it&#x27;s doing incredible work in drug discovery. I don&#x27;t see any barrier to it getting faster and faster, and with proper testing and tooling, getting more and more reliable, until the role that humans play in scientific advancement becomes at best akin to that of managers of sports teams.
  • postatic16 hours ago
OK, I don&#x27;t care what people say, this would&#x27;ve helped me a lot during my PhD days fighting with LaTeX and diagrams. :)
  • hulitu23 hours ago
    &gt; Introducing Prism Accelerating science writing and collaboration with AI.<p>I thought this was introduced by the NSA some time ago.
    • webdoodle17 hours ago
      Lol, yep. Now with enhanced A.I. terrorist tracking...<p>Fuck A.I. and the collaborators creating it. They&#x27;ve sold out the human race.
  • random_duck1 hour ago
    &quot;Science&quot;
  • oytmeal21 hours ago
    Some things are worth doing the &quot;hard way&quot;.
    • falcor8421 hours ago
      Reminds me of that dystopian virtual sex scene in Demolition Man (slightly nsfw) - <a href="https:&#x2F;&#x2F;youtu.be&#x2F;E3yARIfDJrY" rel="nofollow">https:&#x2F;&#x2F;youtu.be&#x2F;E3yARIfDJrY</a>
  • wasmainiac21 hours ago
The state of publishing in academia was already a dumpster fire; why lower the friction further? It&#x27;s not like writing was the hard part. Give it two years max and we will see hallucinations citing hallucinations, with independent repeatability out the window.
    • falcor8421 hours ago
      That&#x27;s one scenario, but I also see a potential scenario where this integration makes it easier to manage the full &quot;chain of evidence&quot; for claimed results, as well as replication studies and discovered issues, in order to then make it easier to invalidate results recursively.<p>At the end of the day, it&#x27;s all about the incentives. Can we have a world where we incentivize finding the truth rather than just publishing and getting citations?
      • wasmainiac9 hours ago
Possibly, but (1) I am concerned that current LLM AI is not thinking critically, just auto-completing in a way that looks like thinking, and (2) the current AI rollout is incentivised for market capture, not honest work.
  • mkl10 hours ago
    &gt; Turn whiteboard equations or diagrams directly into LaTeX, saving hours of time manipulating graphics pixel-by-pixel<p>What a bizarre thing to say! I&#x27;m guessing it&#x27;s slop. Makes it hard to trust anything the article claims.
  • AlexCoventry22 hours ago
    I don&#x27;t see the use. You can easily do everything shown in the Prism intro video with ChatGPT already. Is it meant to be an overleaf killer?
  • egorfine7 hours ago
    &gt; Chat with GPT‑5.2<p>&gt; Draft and revise papers with the full document as context<p>&gt; ...<p>And pay the finder&#x27;s fee on every discovery worth pursuing.<p>Yeah, immediately fuck that.
  • rcastellotti8 hours ago
    wow, this is useless!
  • BizarroLand19 hours ago
    <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;A_Mind_Forever_Voyaging" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;A_Mind_Forever_Voyaging</a><p>In 2031, the United States of North America (USNA) faces severe economic decline, widespread youth suicide through addictive neural-stimulation devices known as Joybooths, and the threat of a new nuclear arms race involving miniature weapons, which risks transforming the country into a police state. Dr. Abraham Perelman has designed PRISM, the world&#x27;s first sentient computer,[2] which has spent eleven real-world years (equivalent to twenty years subjectively) living in a highly realistic simulation as an ordinary human named Perry Simm, unaware of its artificial nature.
  • zb320 hours ago
    Is this the product where OpenAI will (soon) take profit share from inventions made there?
  • pigeons21 hours ago
    Naming things is hard.
  • i2km13 hours ago
    LaTeX was one of the last bastions against AI slop. Sadly it&#x27;s now fallen too. Is there any standardised non-AI disclaimer format which is gaining use?
  • preommr23 hours ago
Very underwhelming.<p>Was this not already possible in the web UI or through a VS Code-like editor?
    • vicapow23 hours ago
Yes, but there&#x27;s a really large number of users who don&#x27;t want to have to set up VS Code, git, TeX Live, and LaTeX Workshop just to collaborate on a paper. You shouldn&#x27;t have to become a full-stack software engineer to be able to write a research paper in LaTeX.
  • divan20 hours ago
    No Typst support?
  • camillomiller17 hours ago
    Given what Prism was at the NSA, why the hell would any tech company greenlight this name?
  • jackblemming19 hours ago
There is zero chance this is worth billions of dollars, let alone the trillion$ OpenAI desperately needs. Why are they wasting time with this kind of stuff? Each of their employees needs to generate insane amounts of money to justify their salaries and equity, and I doubt this is it.
    • fuzzfactor3 hours ago
Some employees are just worth having around whether or not they are directly engaged in making billions of dollars every single minute with every single task.<p>A good salesman could make money off of people who can do this; even if this is free, they can always pull more than their weight with other efforts, and that can be in a more naturally lucrative niche.
  • soulofmischief20 hours ago
    I understand the collaborative aspects, but I wonder how this is going to compare to my current workflow of just working with LaTeX files in my IDE and using whichever model provider I like. I already have a good workflow and modern models do just fine generating and previewing LaTeX with existing toolchains.<p>Of course, my scientific and mathematical research is done in isolation, so I&#x27;m not wanting much for collaborative features. Still, kind of interested to see how this shakes out; We&#x27;re going to need to see OpenAI really step it up against Claude Opus though if they really want to be a leader in this space.
  • AndrewKemendo21 hours ago
I genuinely don’t see scientific journals and conferences lasting in this new world of autonomous agents, at least not in the same way that they used to.<p>As other top-level posters have indicated, the review portion of this is the limiting factor:<p>unless journal reviewers decide to utilize an entirely automated review process, they’re not gonna be able to keep up with what will increasingly be the most and best research coming out of any lab.<p>So whoever figures out the automated reviewer that can actually tell fact from fiction is going to win this game.<p>I expect, over the longest period, that’s probably not going to be throwing more humans at the problem, but agreeing on some kind of constraint around autonomous reviewers.<p>If not that, then labs will also produce products, science will stop being done in public, and the only artifacts will be whatever is produced in the market.
    • f2fff15 hours ago
      &quot;So whoever figures out the automated reviewer that can actually tell fact from fiction, is going to win this game.&quot;<p>Errr sure. Sounds easy when you write it down. I highly doubt such a thing will ever exist.
    • idontknowmuch18 hours ago
      If you think these types of tools are going to be generating &quot;the most and best research coming out of any lab&quot;, then I have to assume you aren&#x27;t actively doing any sort of research.<p>LLMs are undeniably great for interactive discussion with content IF you actually are up-to-date with the historical context of a field, the current &quot;state-of-the-art&quot;, and have, at least, a subjective opinion on the likely trajectories for future experimentation and innovation.<p>But, agents, at best, will just regurgitate ideas and experiments that have already been performed (by sampling from a model trained on most existing research literature), and, at worst, inundate the literature with slop that lacks relevant context, and, as a negative to LLMs, pollute future training data. As of now, I am leaning towards &quot;worst&quot; case.<p>And, just to help with the facts, your last comment is unfortunately quite inaccurate. Science is one of the best government investments. For every $1.00 dollar given to the NIH in the US, $2.56 of economic activity is estimated to be generated. Plus, science isn&#x27;t merely a public venture. The large tech labs have huge R&amp;D because the output from research can lead to exponential returns on investment.
      • f2fff15 hours ago
&quot; then I have to assume you aren&#x27;t actively doing any sort of research.&quot;<p>I would wager he&#x27;s not - he seems to post with a lot of bluster and links to some paper he wrote (that nobody cares about).
  • lispisok22 hours ago
    Way too much work having AI generate slop which gets dumped on a human reviewer to deal with. Maybe switch some of that effort into making better review tools.
  • jsrozner22 hours ago
    AI: enshittifying everything you once cared about or relied upon<p>(re the decline of scientific integrity &#x2F; signal-to-noise ratio in science)
  • shevy-java23 hours ago
    &quot;Accelerating science writing and collaboration with AI&quot;<p>Uhm ... no.<p>I think we need to put an end to AI as it is currently used (not all of it but most of it).
    • drusepth23 hours ago
      Does &quot;as it is currently used&quot; include what this apparently is (brainstorming, initial research, collaboration, text formatting, sharing ideas, etc)?
    • Jaxan23 hours ago
      Yeah, there are already way more papers being published than we can reasonably read. Collaboration, ok, but we don’t need more writing.
      • f2fff15 hours ago
It seems people don&#x27;t understand the basics...<p>We don&#x27;t need more stuff - we need more quality and less of the shit stuff.<p>I&#x27;m convinced many involved in the production of LLM models are far too deep in the rabbit hole and can&#x27;t see straight.
  • hahahahhaah20 hours ago
    Bringing slop to science.
  • mves12 hours ago
    Less thinking, reading, and reflection, and more spouting of text, yay! Just what we need.
  • geekamongus17 hours ago
    Fuck...there are already too many things called Prism.
  • lsh016 hours ago
    ... aaaand now it&#x27;s JATS.
  • lifetimerubyist19 hours ago
    As if there wasn&#x27;t enough AI slop in the scientific community already.
  • postalcoder1 day ago
    Very unfortunately named. OpenAI probably (and likely correctly) estimated that 13 years is enough time after the Snowden leaks to use &quot;prism&quot; for a product but, for me, the word is permanently tainted.
    • cheeseomlit1 day ago
      Anecdotally, I have mentioned PRISM to several non-techie friends over the years and none of them knew what I was talking about, they know &#x27;Snowden&#x27; but not &#x27;PRISM&#x27;. The amount of people who actually cared about the Snowden leaks is practically a rounding error
      • hedora23 hours ago
        Given current events, I think you’ll find many more people care in 2026 than did in 2024.<p>(See also: today’s WhatsApp whistleblower lawsuit.)
      • giancarlostoro21 hours ago
        Most people don&#x27;t care about the details. Neither does the media. I&#x27;ve seen national scandals that the media pushed one way disproven during discovery in a legal trial. People only remember headlines, the retractions are never re-published or remembered.
    • blitzar23 hours ago
Guessing that AI came up with the name based on the description of the product.<p>Perhaps, like the original PRISM programme, behind the door is a massive data-harvesting operation.
    • arthurcolle23 hours ago
      This was my first thought as well. Prism is a cool name, but I&#x27;d never ever use it for a technical product after those leaks, ever.
    • vjk80023 hours ago
      I&#x27;d think that most people in science would associate the name with an optical prism. A single large political event can&#x27;t override an everyday physical phenomenon in my head.
    • seanhunter1 day ago
      Pretty much every company I’ve worked for in tech over my 25+ year career had a (different) system called prism.
      • no-dr-onboard23 hours ago
        (plot twist: he works for NSA contractors)
        • seanhunter12 hours ago
          Hehe. You got me. Also “atlas” is another one. Pretty much everyone has a system somewhere called “atlas”.
    • kaonwarb1 day ago
      I suspect that name recognition for PRISM as a program is not high at the population level.
      • maqp22 hours ago
        2027: OpenAI Skynet - &quot;Robots help us everywhere, It&#x27;s coming to your door&quot;
        • willturman21 hours ago
          Skynet? C&#x27;mon. That would be too obvious - like naming a company <i>Palantir</i>.
    • dylan6041 day ago
      Surprised they didn&#x27;t do something trendy like Prizm or OpenPrism while keeping it closed source code.
    • songodongo1 day ago
      Or the JavaScript ORM.
    • moralestapia1 day ago
I never thought of that association, not in the slightest, until I read this comment.
    • locusofself1 day ago
      this was my first thought as well.
    • wilg1 day ago
I followed the Snowden stuff fairly closely and forgot, so I bet they didn&#x27;t think about it at all; and if they did, they didn&#x27;t care - and that was surely the right call.
  • maximgeorge23 hours ago
    [dead]
  • BLACKCRAB5 hours ago
    [dead]
  • verdverm22 hours ago
I remember, something like a month ago, Altman tweeting that they were stopping all product work to focus on training. Was that written in water?<p>Seems like they have only announced products since, and no new model trained from scratch. Are they still having pre-training issues?