30 comments

  • ghm2199 13 hours ago
    > Building a comparable one from scratch is like building a parallel national railroad.

    Not to be pedantic here, but I do have a noob question or two:

    1. One is building the index, which is a lot harder without Google offering its own API to boot. If other tech companies really wanted to break this monopoly, why can't they just do it — like they did with LLM training for base models with the infamous "Pile" dataset? The upshot of offering this index as a public good would break not just Google's search monopoly but also its other monopolies like Android, which would introduce a breath of fresh air into a myriad of UX (mobile devices, browsers, maps, security). So why don't they just do this already?

    2. The other question is about "control", which the DoJ has provided guidance for but not yet enforced. IANAL, but why can't a state's attorney general enforce this?
    • oh_fiddlesticks 12 hours ago
      > 1. One is building the index, which is a lot harder without a google offering its own API to boot. If other tech companies really wanted to break this monopoly, why can't they just do it?

      FTA:

      > Context matters: Google built its index by crawling the open web before robots.txt was a widespread norm, often over publishers’ objections. Today, publishers “consent” to Google’s crawling because the alternative - being invisible on a platform with 90% market share - is economically unacceptable. Google now enforces ToS and robots.txt against others from a position of monopoly power it accumulated without those constraints. The rules Google enforces today are not the rules it played by when building its dominance.
      • creato 11 hours ago
        robots.txt was being enforced *in court* before google even existed, let alone before google got so huge:

        > The robots.txt played a role in the 1999 legal case of eBay v. Bidder's Edge,[12] where eBay attempted to block a bot that did not comply with robots.txt, and in May 2000 a court ordered the company operating the bot to stop crawling eBay's servers using any automatic means, by legal injunction on the basis of trespassing.[13][14][12] Bidder's Edge appealed the ruling, but agreed in March 2001 to drop the appeal, pay an undisclosed amount to eBay, and stop accessing eBay's auction information.[15][16]

        https://en.wikipedia.org/wiki/Robots.txt
        • dragonwriter 9 hours ago
          Not only was *eBay v. Bidder's Edge* technically after Google existed, not before; more critically, the slippery-slope interpretation of California trespass-to-chattels law that the District Court relied on was considered and rejected by the California Supreme Court in *Intel v. Hamidi* (2003), and similar logic applied to other states' trespass-to-chattels laws has been rejected by other courts since. *eBay v. Bidder's Edge* was an early *aberration* in the application of the law, not something that established or reflected a lasting norm.
          • creato 6 hours ago
            The point is, robots.txt was definitely a thing that people expected to be respected before and during Google's early existence. This Kagi claim seems to be at least partially false:

            > Google built its index by crawling the open web before robots.txt was a widespread norm, often over publishers’ objections.
            • hattmall 2 hours ago
              Perhaps it wasn't a widespread norm, though. But I don't really see why that matters much. Is the issue that sites with robots.txt today only allow Googlebot and not other search engines? Or is Google somehow benefiting from having two-decade-old content that is now blocked by robots.txt because the website operators don't want it indexed?
          • throw-the-towel 10 hours ago
            Nitpick: Google incorporated in 1998, so before the *Bidder's Edge* case.
          • yuuxheu 10 hours ago
          [flagged]
      • baggachipz 11 hours ago
        A classic case of climbing the wall, and pulling the ladder up afterward. Others try to build their own ladder, and Google uses their deep pockets and political influence to knock the ladder over before it reaches the top.
        • dylan604 9 hours ago
          Why does Google even need to know about your ladder? Build the bot, scale it up, save all the data, then release. You can now remove the ladder and obey robots.txt just like G. Just like G, once you have the data, you have the data.

          Why would you tell G that you are doing something? Why tell a competitor your plans at all? Just launch your product when the product is ready. I know that's anathema to SV startup logic, but in this case it's good business.
          • Nextgrid 7 hours ago
            Running the bot nowadays is hard, because a lot of sites will now block you - not just by asking nicely via robots.txt, but by checking your actual source IP. Once they see it's not Google, they send you a 403.
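Google publicly documents that site operators can verify real Googlebot traffic with a reverse-then-forward DNS check, which is why spoofing the user-agent alone does not get you past the 403 described above. A minimal sketch of that check (the function names here are mine, not from any library):

```python
import socket

# Domains Google documents for its crawlers' reverse-DNS hostnames.
GOOGLE_SUFFIXES = (".googlebot.com", ".google.com")

def hostname_is_google(hostname: str) -> bool:
    """Pure check: does a PTR hostname fall under a Google-owned domain?"""
    return hostname.endswith(GOOGLE_SUFFIXES)

def is_real_googlebot(ip: str) -> bool:
    """Reverse-then-forward DNS verification: the PTR record must resolve
    under googlebot.com or google.com, and that hostname must resolve
    back to the same IP. Requires live DNS, so it is best cached."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)           # reverse (PTR) lookup
        if not hostname_is_google(hostname):
            return False
        forward_ips = socket.gethostbyname_ex(hostname)[2]  # forward lookup
        return ip in forward_ips
    except OSError:
        return False
```

A competing crawler cannot pass this check no matter what user-agent it sends, which is the structural advantage the comment is pointing at.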
          • monooso 7 hours ago
            Cost, presumably. From the article:

            > Microsoft spent roughly $100 billion over 20 years on Bing and still holds single-digit share. If Microsoft cannot close the gap, no startup can do it alone.
      • ghm2199 11 hours ago
        True. But the thing is, if one says "We will make sure your site is in a worldwide, freely available index" which is kept fresh, Google's monopoly ship already begins to take on water. Here is an appropriate line from a completely different domain, rare earth metals, from The Economist on the Chinese government's weaponization of rare earths [1]:

        > Reducing its share from 90% to 80% may not sound like much, but it would imply a doubling in size of alternative sources of supply, giving China’s customers far more room for manoeuvre.

        [1] https://archive.ph/POkHZ#selection-1233.117-1233.302
    • jeromechoo 10 hours ago
      Building an index is easy. Building a fresh index is extremely hard.

      Ranking an index is hard. It's not just BM25 or cosine similarity. How do you prioritize certain domains over others? How do you rank homepages, which typically have no real content in them, for navigational queries?

      Changing the behavior of 90% of the non-Chinese internet means unraveling 25 years and billions of dollars spent on ensuring Google is the default and sometimes only option.

      Historically, it takes a significant technological counter-position or an antitrust breakup for a behemoth like Google to lose its footing. Unfortunately for us, Google is currently competing well in the only true technological threat to its existence to appear in decades.
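As a point of reference for the comment above: the BM25 scoring it treats as table stakes really does fit in a few lines; everything beyond it (domain priors, freshness, navigational-query handling) is the hard part. A sketch of standard Okapi BM25 over a toy corpus (the corpus is illustrative):

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each doc (a list of tokens) against the query with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # document frequency of each query term
    df = {t: sum(1 for d in docs if t in d) for t in query_terms}
    scores = []
    for doc in docs:
        tf = Counter(doc)
        score = 0.0
        for t in query_terms:
            if df[t] == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            denom = tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * tf[t] * (k1 + 1) / denom
        scores.append(score)
    return scores

docs = [
    "kagi search engine review".split(),
    "google antitrust ruling search".split(),
    "banana bread recipe".split(),
]
scores = bm25_scores(["search", "engine"], docs)
```

This ranks the document containing both query terms first; turning such a lexical score into a usable web ranking is exactly the unsolved-in-a-weekend part.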
      • AlienRobot 8 hours ago
        Good news! Google doesn't know how to rank pages either!
    • KellyCriterion 12 hours ago
      Scraping is hard. Very good scraping is even harder. And today, being a scraping business is veeery difficult; there are some "open"/public indices, but none of these other indices ever took off.
      • ghm2199 11 hours ago
        Well, sure, I don't contest that it's hard. But if the top tech companies put their heads together - Meta, Apple, and MS have enough talent between them to make an open-source index, if only to reap the gains from the de-monopolization of it all.
        • Nextgrid 7 hours ago
          All these companies have the exact same business model as Google (advertising) and the same mismatched incentives: *good* search results are not something they want.

          Google Search sucks not because Google is incapable of filtering out spam and SEO slop (though they very much love that people believe they can't), but because spam/slop makes the ads on the SERP more enticing, *and* some of the spam itself includes Google Ads/Analytics and benefits them there too.

          There is no incentive for these companies to build a good search engine themselves to begin with, let alone provide data to allow *others* to build one.
          • alex1138 2 hours ago
            I was on the Goog forums for years (before they even fucking ruined the FORMAT of the forums, possibly to 'be more mobile friendly') and it was people absolutely (justifiably) screaming at the product people.

            No, the customer isn't 'always' right, but these guys like to get big, and once big: fuck you, we don't have to listen to you, we're big; what are you going to do, leave?
        • Imustaskforhelp 11 hours ago
          I mean, doesn't Microsoft have Bing?
          • ghm2199 11 hours ago
            Yeah, but no one uses it. I am not even sure the people who are forced to use it like it, because it was productized pretty poorly. After all, who wants another Google? They invested $100 billion, which is a lot of wasted money TBH.

            Search indexes are hard, surely, but if you were to strip it down to just a good index in the browser, made it free, and kept it fresh, it cannot cost $100 billion to build. Then you use this DoJ decision and fight Google so it cannot deny a free index equal standing on Chrome, and you have a massive shot at a win for a LOT less money.
            • Imustaskforhelp 11 hours ago
              > Yeah but no one uses it. I am not even sure people like using it because it was productized it pretty poorly. They invested 100 Billion dollars, which is a lot of wasted money TBH.

              I mean... DuckDuckGo uses the Bing API IIRC, and I use DuckDuckGo, and many people use DuckDuckGo.

              I also used Bing once because Bing used to cache websites which weren't available in the Wayback archive. I don't know how, but it was a pretty cool solution to a problem.

              I hate Bing too, and I am kind of interested in Ecosia/Qwant's future as well (yes, there's Kagi too, and good luck to Kagi! But I am currently still staying on DuckDuckGo).
              • ghm2199 10 hours ago
                DuckDuckGo is really cool. I am almost fully rooting for them, and they are my default on mobile and web.

                The small distributed team grinding it out against the Goliath. They are awesome and perhaps the right example of what a path like this would look like. Maybe someone from their team can chime in on the difficulties of building a search engine that works in the face of tremendous odds.
              • dylan604 9 hours ago
                I would imagine the users of DDG are closer to a rounding error than an actual percentage of users. I'd imagine theGoog would both love and hate to have 100%. They'd love it because of all the data, and hate it as it would prove the monopoly. At the end of the day, the % that is not going to them probably doesn't cause theGoog to lose much sleep.
                • Imustaskforhelp 9 hours ago
                  It's just so wild how great DuckDuckGo is & how under-rated it is.

                  It's available in all major browsers. (Here in Zen browser, there isn't even a single default; the start page asks you to choose between Google, DuckDuckGo, and Bing. Yes, if you press next it starts from Google, but Zen can even start from DDG, so it's not such a big deal.)

                  DuckDuckGo is super amazing, and their duck.ai actually provides concise answers instead of Google's AI.

                  DDG is leaps ahead of Google in terms of everything. I found Kagi to be pleasant too, and with PPP it might make sense in Europe and America, but privacy isn't / shouldn't be something only those who pay get. So DDG is great for me personally and I can't recommend it enough for most cases.

                  Brave/Startpage is a second, but DDG is so good :)

                  It just works (for most cases). The only use case where I use Google is uploading an image to get more images like it, or using an image as a search query; I just do !gi and open images.google.com, but I use this function very rarely. Bangs are an amazing feature of DDG.
                  • dylan604 8 hours ago
                    I use DDG myself. I just assumed that I'm not a very sophisticated user, as I've never had it not serve my needs, based on how other people here say it's not very good.
      • renegat0x0 10 hours ago
        Scraping is hard, and yet not that hard at the same time. There are many projects for scraping, so with a few lines you can implement a scraper using curl_cffi or Playwright.

        People complain that a user-agent needs to be filled in. Boo-hoo, are we on Hacker News or what? Can't we just provide cookies and a user-agent? Not a big deal, right?

        I myself have implemented a simple solution that is able to jump through many hoops and provide a JSON response. Simple and easy [0].

        On the other hand, it has always been an arms race, and it will be. Eventually all content will be protected by walled gardens; there is no going around it.

        Search engines affect me less and less every day. I have my own small "index" / "bookmarks" with many domains, GitHub projects, and YouTube channels [1].

        Since the database is so big, the places I use most are extracted into a simple and fast web page using an SQLite table [2]. Scraping done right is not a problem.

        [0] https://github.com/rumca-js/crawler-buddy

        [1] https://github.com/rumca-js/Internet-Places-Database

        [2] https://rumca-js.github.io/search
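The "few lines" claim above holds even with only the standard library. A sketch of a polite single-page fetch that fills in a user-agent and honors robots.txt (the user-agent string and URL handling are illustrative, not from any of the linked projects):

```python
from urllib import robotparser, request

UA = "example-research-bot/0.1"  # illustrative user-agent string

def allowed_by_robots(robots_txt: str, url_path: str, agent: str = UA) -> bool:
    """Check a path against robots.txt rules supplied as text."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url_path)

def fetch(url: str) -> bytes:
    """Fetch a page with an explicit user-agent header."""
    req = request.Request(url, headers={"User-Agent": UA})
    with request.urlopen(req, timeout=10) as resp:
        return resp.read()
```

This covers the cooperative sites; the arms-race part of the comment (fingerprinting, IP reputation) is what tools like curl_cffi and Playwright exist for.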
        • SyneRyder 8 hours ago
          +1 so much for this. I have been doing the same: an SQLite database of my "own personal internet" of the sites I actually need. I use it as a tiny supplementary index for a metasearch engine I built for myself - which I actually did to replace Kagi.

          Building a metasearch engine is not hard to do (especially with AI now). It's so liberating when *you* control the ranking algorithm and can supplement what the big engines provide as results with your own index of sites and pages that are important to you. I admit my results & speed aren't as good as Kagi's, but they are still good enough that my personal search engine has been my sole search engine for a year now.

          If a site doesn't want me to crawl them, that's fine. I probably don't need them. In practice it hasn't gotten in the way as much as I might have thought it would. But I do still rely on Brave / Mojeek / Marginalia to do much of the heavy lifting for me.

          I especially appreciate Marginalia for publicly documenting as much about building a search engine as they have: https://www.marginalia.nu/log/
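A personal SQLite index like the one described above can be a single full-text table. This sketch assumes the bundled SQLite was compiled with the FTS5 extension (true for typical CPython builds); the schema, URLs, and data are illustrative, not SyneRyder's actual setup:

```python
import sqlite3

# A tiny "personal internet" index: full-text search over bookmarked pages.
con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE pages USING fts5(url UNINDEXED, title, body)")
con.executemany(
    "INSERT INTO pages VALUES (?, ?, ?)",
    [
        ("https://example.org/sqlite", "SQLite notes", "embedded database tips"),
        ("https://example.org/bread", "Bread log", "sourdough starter notes"),
    ],
)

def search(query: str):
    """Rank matches with FTS5's built-in relevance (lower rank = better)."""
    rows = con.execute(
        "SELECT url FROM pages WHERE pages MATCH ? ORDER BY rank", (query,)
    )
    return [url for (url,) in rows]
```

A metasearch frontend can then merge these local hits with results from Brave / Mojeek / Marginalia under whatever ranking you prefer.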
        • visarga 3 hours ago
          > Search engines affect me less, and less every day. I have my own small "index" / "bookmarks" with many domains, github projects, youtube channels

          Exactly. Why can't we just hoard our bookmarks and a list of curated sources, say 1M or 10M small search stubs, and have an LLM direct the scraping operation?

          The idea is to have starting points for a scraper, such as blogs, awesome lists, specialized search engines, news sites, docs, etc. On a given query, the model only needs a few starting points to find fresh information. Hosting a few GB of compact search stubs could go a long way towards search independence.

          This could mean replacing Google. You could even go fully local with a local LLM + code sandbox + search stub index + scraper.
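The stub-selection step sketched above, before any LLM or scraper is involved, can be as simple as keyword-overlap scoring over the stub list. A toy sketch (the stubs and keywords are illustrative):

```python
# Each "search stub" is a starting point tagged with a few descriptive keywords.
STUBS = [
    ("https://news.ycombinator.com", {"tech", "startups", "programming"}),
    ("https://www.marginalia.nu/log/", {"search", "engine", "indexing"}),
    ("https://commoncrawl.org", {"crawl", "dataset", "web", "search"}),
]

def pick_starting_points(query: str, k: int = 2):
    """Rank stubs by keyword overlap with the query; return the top-k URLs
    that match at least one term. An LLM or scraper takes it from there."""
    terms = set(query.lower().split())
    ranked = sorted(STUBS, key=lambda s: len(terms & s[1]), reverse=True)
    return [url for url, kw in ranked[:k] if terms & kw]
```

At the scale mentioned (1M to 10M stubs) you would swap the linear scan for an inverted index, but the shape of the pipeline stays the same: query → a few starting points → targeted scraping.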
          • ghm2199 8 hours ago
            When I saw the Internet-Places-Database I thought it was an index of some sort of PoI data, and I got curious. But the personal-internet spiel is pretty cool. One good addition to this could be the Foursquare PoI dataset for places search: https://opensource.foursquare.com/os-places/
    • hamdingers 12 hours ago
      > If other tech companies really wanted to break this monopoly, why can't they just do it

      Google is a verb; nobody can compete with that level of mindshare.
      • observationist 12 hours ago
        A big part of it is the legal minefield you'd enter if you presented any sort of real threat to Google. Nobody wants to wager billions in infrastructure and IP against Google or Apple or Microsoft, even if you could whip up a viable competing product in a weekend (for any given product).

        Part of it is also the ecosystem - don't threaten adtech, because the wrong lawsuits, the wrong consumer trend, the wrong innovation that undercuts the entire adtech ecosystem means they lose their goose with the golden eggs.

        Even if Kagi or some other company achieves legitimate mindshare in search, they still don't have the infrastructure, ancillary products, and cash reserves of Google, etc. The second they become a real "threat" in Google's eyes, they'd start seeing lawsuits over IP, hostile and aggressive resource acquisitions to freeze out their expansion, arbitrary deranking in search results, possibly heightened government audits and regulatory interactions, and so on. Google has access to a ton of legal levers, not to mention the whole endless flood of dirty tricks money can buy (not that Google would ever do that).

        They're institutional at this point; they're only going away if/when government decides to break them up and make things sane again.
      • wongarsu 12 hours ago
        Xerox is a verb, but most copy machines I see are made by their competition
        • hamdingers 12 hours ago
          Wonder why that could be?

          https://www.nytimes.com/1975/07/31/archives/xerox-settlement...
      • eikenberry 12 hours ago
        Kleenex isn't the only brand of tissues sold in stores.
      • iamacyborg 8 hours ago
        How’s that working out for Hoover in the UK?
      • cowsandmilk 10 hours ago
        Licensing their index doesn’t change that.
      • Zyst 12 hours ago
        So were AOL and Skype.
        • dylan604 9 hours ago
          I don't ever recall anyone using AOL as a verb. How would you do that?
    • walls 12 hours ago
      A huge amount of the web is only crawlable with a googlebot user-agent and specific source IPs.
      • hattmall 2 hours ago
        Are these websites not serving public content? If there are legal concerns, just create a separate scraping LLC that fakes the user agent and uses residential IPs or a VPN or something. I can't imagine the companies would follow through with some sort of lawsuit against a scraper that's trying to index their site to get them more visitors, if they already allow Googlebot.
      • Imustaskforhelp 11 hours ago
        > And given you-know-what, the battle to establish a new search crawler will be harder than ever. Crawlers are now presumed guilty of scraping for AI services until proven innocent.

        I have always wondered how the Wayback Machine works. Is there no way we can use the Wayback archive and then build an index on top of every Wayback snapshot somehow?
        • ghm2199 10 hours ago
          You can read https://hackernoon.com/the-long-now-of-the-web-inside-the-internet-archives-fight-against-forgetting - it is a nice look into their infrastructure. One could theoretically build it. A few things stand out:

          1. IIUC it depends a lot on "Save Page Now" democratization, which could work, but it's not like a crawler.

          2. In the absence of Alexa they depend quite heavily on Common Crawl, which is quite crazy because there literally is no other place to go. I don't think they can use Google's syndicated API, because they would then start capturing ads in their database - garbage that would strain their tiny storage budget.

          3. Minor from a software engineering perspective but important for survival of the organization: since they are an archive of record, converting that into an index would need a good legal team to battle Google. They do have the DoJ's recent ruling in their favor.
      • deepsquirrelnet 11 hours ago
        I do not know a lot about this subject, but couldn't you make a pretty decent index off of Common Crawl? It seems to me the bar is so low you wouldn't have to have everything - especially if your goal was not monetization with ads.
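Common Crawl does expose a per-crawl CDX index API that a small index could bootstrap from. A sketch of querying it (the crawl label shown is illustrative, and the actual fetch obviously needs network access):

```python
import json
from urllib import parse, request

def cdx_query_url(domain: str, crawl: str) -> str:
    """Build a Common Crawl CDX index query URL for all captures under a
    domain. The crawl label (e.g. 'CC-MAIN-2024-10') names one crawl."""
    params = parse.urlencode({"url": f"{domain}/*", "output": "json"})
    return f"https://index.commoncrawl.org/{crawl}-index?{params}"

def fetch_captures(domain: str, crawl: str):
    """Fetch capture records for the domain, one JSON object per line.
    Each record points into the crawl's WARC files for the page body."""
    with request.urlopen(cdx_query_url(domain, crawl), timeout=30) as resp:
        return [json.loads(line) for line in resp.read().splitlines() if line]
```

The catch, as the replies note, is coverage and freshness: each crawl is a snapshot, so a Common-Crawl-backed index starts out both incomplete and stale.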
        • ghm2199 11 hours ago
          I think someone commented on another thread about SerpApi the other day that Common Crawl is quite small. It would be a start. I think the key to a good index people will actually use is freshness of results. You need good recall for a search engine; precision tuning / re-ranking is not going to help otherwise.
      • charcircuit 10 hours ago
        If a crawler offered enough money, it could be allowed to crawl too. It's not like Google has exclusive crawling rights.
        • Nextgrid 7 hours ago
          There is a logistics problem here - even if you *had* enough money to pay, how would you get in touch with every single site to even let them know you're happy to pay? It's not like site operators routinely scan their error logs to see your failed crawling attempts and your offer in the user-agent.

          Even if they see it, it's a classic chicken & egg problem: it's not worth the site operator's time to engage with your offer until your search engine is popular enough to matter, but your search engine will never become popular enough to matter if it doesn't have a critical mass of sites to begin with.
          • charcircuit 4 hours ago
            Realistically you don't need every single site on board before your index becomes valuable. You can get in touch with sites via social media, email, Discord, or even by visiting them face to face.
            • stavros 55 minutes ago
              You really do need every single site, as search is a long-tail problem. All the interesting stuff is in the fringes; if you only have a few big sites you'll have a search engine of spam.
    • hsuduebc2 13 hours ago
      I don’t think it’s comparable to today’s AI race.

      Google has a monopoly, an entrenched customer base, and stable revenue from a proven business model. Anyone trying to compete would have to pour massive money into infrastructure and then fight Google for users. In that game, Google already won.

      The current AI landscape is different. Multiple players are competing in an emerging field with an uncertain business model. We’re still in the phase of building better products, where companies started from more similar footing and aren’t primarily battling for customers yet. In that context, investing heavily in the core technology can still make financial sense. A better comparison might be the early days of car makers, or the web browser wars before the market settled.
      • ghm2199 11 hours ago
        > ... stable revenue from a proven business model... In that game, Google already won.

        But if they were to pour that money strategically to capture market share, one of two things would happen if Google was replaced or lost share:

        1. It would be the start of the commoditization of search, i.e. the search engine/index would become a commodity and more specialized, and people could buy what they want and compete.

        2. A new large tech company takes the reins. In which case it would be as bad as this time.

        What I don't get is that if other big tech companies actually broke apart the monopoly on search, several Google dominoes in mobile devices, browser tech, and location capabilities would fall. It would be a massive injection of new competition into the economy; lots of people would spend more dollars across the space (and on ad-driven buying too), and the money would not accrue in an offshore tax haven in Ireland.

        To play the devil's advocate, I think the only reason it's not happening is that Meta, Apple, and Microsoft have very different moats/business models to profit from. They have all been stung at one time or another, in small or big ways, for trying to build something that could compete but failed: MS with Bing, Meta with Facebook search, and Foursquare - not big tech, but still - with Marauder's Map.
    • citizenpaul 7 hours ago
      > why can't they just do it

      Money. Google controls 99% of the advertising market. That's why it's called a monopoly. No one else can compete because they can never make enough money to make it worth the cost of doing it themselves.
    • xnx 12 hours ago
      > If other tech companies really wanted to break this monopoly, why can't they just do it

      Companies would rather sue than try to compete by investing their own money.
    • paxys 12 hours ago
      Apple had a chance to break Google's search monopoly, but they chose to take billions from them instead.

      Microsoft had a chance (well, another chance, after they gave up IE's lead) to break up Google's browser monopoly, but they decided to use Chromium for free instead.

      Ultimately all these decisions come down to what's more profitable, not what's in the best interests of the public. We have learned this lesson x1000000. Stop relying on corporations to uphold freedoms (software or otherwise), because that simply isn't going to happen.
      • charcircuit 10 hours ago
        > but they chose to take billions from them instead.

        They chose to use Google with a revenue-sharing agreement. Google is very well monetized; it would be very difficult for Apple to monetize their own search as well as Google can.

        > they decided to use Chromium

        Windows ships with Microsoft Edge as the browser, which Microsoft has full control over.
  • WhyNotHugo 13 hours ago
    The statistics in this article sound like garbage to me.

    Google used by 90% of the world?

    ~20% of the human population lives in countries where Google is blocked.

    OTOH, Baidu is the #1 search engine in China, which has over 15% of the world’s population... but doesn’t reach 1%?

    These stats are made measuring US-based traffic, rather than “worldwide” as they claim.
    • weisnobody 12 hours ago
      Yes, the stats don't make sense. It appears to be an issue with StatCounter.

      The search engine Wikipedia article [1] has a section on Russia and East Asia market share, which confirms that the roll-up used for the worldwide counts is off, unless the number of people using the Internet is drastically different in some of these countries.

      Russia:
        * Yandex: 70.7%
        * Google: 23.3%

      China:
        * Baidu: 59.3%
        * Other domestic engines: "smaller shares"
        * Bing: 13.6%

      South Korea:
        * Naver: 59.8%
        * Google: 35.4%

      Japan:
        * Google: 76.2%
        * Yahoo! Japan: 15.8%

      [1] https://en.wikipedia.org/wiki/Search_engine#Market_share
      • dylan604 9 hours ago
        Maybe it's the same logic that says you can lower the prices of things >100%.
    • lolc 13 hours ago
      I guess they'd argue that the people in China don't count, because people in China don't get to choose Google. But yeah, the stats they use from StatCounter are clearly not representative of what the world uses.
      • manquer 4 hours ago
        Market share is based on actual consumption numbers, however subsidized or regulated by a government - not on free will.

        Choice/free will is an arbitrary line in the sand; one could argue how much choice we have about consuming Google search when it is an "85-90%" monopolistic business with well-documented anti-competitive practices.

        Chinese consumers perhaps have more choice than we do: Baidu has only about 60% market share. They do get to choose; it's more that Google is not one of the options available to them. It is not as if the alternative to Baidu is a phone book.
      • elAhmo 12 hours ago
        You can argue that people outside of China don't get to choose something other than Google. Sure, there are recent pushes with default-search-engine choice screens and similar initiatives, but there is a reason why Google is paying hundreds of millions of dollars to be the default search engine.
        • tyre 4 hours ago
          It's reasonable to see a distinction between the Great Firewall and the default browser search engine.
    • ivanjermakov 11 hours ago
      To be fair, Kagi won't be used in China either.
    • 0x1ch 13 hours ago
      Google is only blocked in places where it would already be hard for a company with morals to work, if it were not outright blocked as well. This probably represents traffic globally, excluding those places.

      Instead of downvoting blindly, please state which countries currently blocking Google would willingly allow Kagi, an AI/privacy-focused search engine company, to exist in their domain. The results may surprise you!
      • PunchTornado 4 minutes ago
        There is nowhere in the article where they mention this (not even an * saying what they exclude).

        They present numbers and say "world" as if whole countries and groups of people don't matter. Very arrogant.
      • direwolf20 11 hours ago
        Google is not blocked in the USA.
        • 0x1ch 10 hours ago
          Interesting. I'm in the US and use Kagi every day.
          • dylan604 9 hours ago
            I read it more as "company having morals". Not many US companies have "morals".
            • 0x1ch 9 hours ago
              Google doesn't; Kagi seems to (hopefully). I meant this more as a jab at countries willing to block Google, as they're generally dictatorships / authoritarian in nature. Oh the irony, as an American saying this in 2026...
  • pfist 11 hours ago
    I am rooting for Kagi here, and I applaud their transparency on such matters. It is quite enlightening for someone like me who understands technology but knows little about the inner workings of search.

    It remains to be seen how or if the remedies will be enforced, and, of course, how Google will choose to comply with them. I am not optimistic, but at least there is some hope.

    As an aside: the 1998 white paper by Brin and Page is remarkable to read knowing what Google has become.
    • m-schuetz 48 minutes ago
      I'm rooting for Kagi solely because of the block feature. It's amazing to be able to block undeservedly SEO'd garbage sites from future search results.
  • xnx 14 hours ago
    > Because direct licensing isn’t available to us on compatible terms, we - like many others - use third-party API providers for SERP-style results

    Crazy for a company to admit: "Google won't let us whitelabel their core product, so we steal it and resell it."
    • eli 13 hours ago
      Seems like an open question whether that violates any laws.

      Another way to look at it is that if you publish a service on the web, you have limited rights to restrict what people do with it.

      Isn't that the logic Google Search relies on in the first place? I didn't give permission for Google to crawl and index and deep-link to my site (let alone summarize it and train LLMs on it). They just did it anyway, because it's on a public website.
      • malfist10 hours ago
        Google&#x27;s stance is &quot;I can copy you and you can&#x27;t stop me&quot; as well as &quot;You can&#x27;t copy me, I&#x27;ll sue you&quot;
        • GuB-424 hours ago
Maybe it has changed, but Google doesn&#x27;t look like it uses litigation as its primary weapon. It defends itself but rarely attacks.<p>They are however more than happy to use technical measures, like blocking accounts. And because of their position, blocking your Google account may be more damaging than a successful lawsuit.
    • techjamie13 hours ago
What&#x27;s the alternative? Building a competing search index as a relative nobody on the web is very difficult from the outset, and is made more difficult by sites taking extra measures to stop bots in general now.<p>Google&#x27;s crawler is given special privileges in this regard and can bypass basically all bot checks. Anyone else has to just wade through the mud and accept they can&#x27;t index much of the web.
    • direwolf2014 hours ago
      Pretty standard business practice though. There&#x27;s no ethics in making money.
    • roywiggins11 hours ago
      Is it much different than what Google AI Summaries do?
    • timeon11 hours ago
Even the article posted (and search itself) has a Google IP address.
    • shadowgovt13 hours ago
      But in this current climate, they can admit it and then dare Google to tell them to stop... After Google has just had an antitrust ruling against it for dominating the search market.<p>Google doesn&#x27;t really have a leg to stand on and they know it.
    • Ar-Curunir13 hours ago
      Strange to pick on Kagi when there&#x27;s much bigger companies on that list.
      • xnx12 hours ago
        Those companies allegedly have used SerpAPI (probably to check visibility), but not to resell a Google Search knock-off.
        • manquer4 hours ago
&gt; knock-off<p>Is it though? It feels so much better than Google results[1], while still being built partly with Google results.<p>In the last 3 years as a Kagi customer I have rarely if ever felt the need to use the !g bang, and on the few occasions I did, it was with instant regret.<p>In the previous decade or so of using DDG, !g would be 30-50% of my searches; I would consciously try the DDG results first instead of starting with !g, telling myself DDG was at least getting the query data to improve its results.<p>[1] While the de-cluttered UI is a relief, on just the results-list comparison Google search is so bad that the time saved (not redrafting queries constantly, filtering out the spam, the AI summaries, sponsored content, all the &quot;cards&quot; and recommended-search listicles) is worth more than the $10&#x2F;month.
  • whs14 hours ago
&gt;Google: Google does not offer a public search API. The only available path is an ad-syndication bundle with no changes to result presentation - the model Startpage uses. Ad syndication is a non-starter for Kagi’s ad-free subscription model.[^1]<p>&gt;Because direct licensing isn’t available to us on compatible terms, we - like many others - use third-party API providers for SERP-style results (SERP meaning search engine results page). These providers serve major enterprises (according to their websites) including Nvidia, Adobe, Samsung, Stanford, DeepMind, Uber, and the United Nations.<p>The customer list matches what is listed on SerpAPI&#x27;s page (interestingly, DeepMind is on Kagi&#x27;s list while they&#x27;re a Google company...). I suppose Kagi needs to pen this because if SerpAPI shuts down they may lose access to Google, though they may already utilize multiple providers. In the past, Kagi employees have said that they had access to a Google API, but it seems that was not the case?<p>As a customer, the major implication of this is that even if Kagi&#x27;s privacy policy says they try not to log your queries, they are sent to Google and still subject to Google&#x27;s consumer privacy policy. Even if anonymized, your queries can still end up contributing to Google Trends.
  • ajdude13 hours ago
    Does anyone else use the phrase &quot;I&#x27;m going to google XYZ&quot; while referring to actually searching it up on Kagi, DDG, or another search engine?
    • eli13 hours ago
      Ironically this is a bad thing for Google from a legal standpoint. If a term becomes &quot;genericized&quot; then it can lose trademark protection.<p>&quot;Aspirin&quot; is a famous example. It used to be a brand name for acetylsalicylic acid medication, but became such a common way to refer to it that in the US any company can now use it.
      • 1-more13 hours ago
        Apparently the &quot;lost in the Treaty of Versailles&quot; explanation is a bit of a just-so story: <a href="https:&#x2F;&#x2F;history.stackexchange.com&#x2F;questions&#x2F;55729&#x2F;why-did-bayer-lose-aspirin-and-heroin-trademarks-under-the-1919-treaty-of-versai" rel="nofollow">https:&#x2F;&#x2F;history.stackexchange.com&#x2F;questions&#x2F;55729&#x2F;why-did-ba...</a>
    • shervinafshar13 hours ago
      I&#x27;ve been using Kagi for the past few years, but I try to use a brand-agnostic language talking about web search; e.g. &quot;I&#x27;m gonna search [the web] for it&quot;; &quot;Use your favorite search engine to look it up&quot;.
    • jeremyjh13 hours ago
Yes, it’s like Xerox or Kleenex except it’s actually still a monopoly. I’m a happy Kagi user but I know hardly anyone else who is.
    • dooglius13 hours ago
      Yeah, I don&#x27;t feel the need to have conversations go on a tangent about explaining what Kagi is
    • kqr13 hours ago
      I used to. Even when I actually used DDG. Now that I use Kagi (and thus am on the second web search service after I stopped using Google) it started to feel silly so I say &quot;search the web&quot; these days.
    • pixl9713 hours ago
      Yes, but more in the past than now, simply because almost everybody seems to use google itself.<p>For example I&#x27;d hear people say &quot;I&#x27;ll Google that&quot;, then use Yahoo when they were still a major search engine.
    • dijksterhuis13 hours ago
      nope, i say “i’m going to search for XYZ” or similar
    • bronson12 hours ago
      Now my family usually says &quot;I&#x27;m going to ask AI.&quot;
    • 2019847 hours ago
      I do
    • matkoniecz12 hours ago
      yes, me
    • chroma20513 hours ago
      &gt; Does anyone else use the phrase &quot;I&#x27;m going to google XYZ&quot; while referring to actually searching it up on Kagi, DDG, or another search engine?<p>Not me. I only use Google.<p>Never used Kagi or DDG. Don’t care enough.
  • ApolloFortyNine11 hours ago
With Google&#x27;s search engine making almost $200 billion a year in revenue, I&#x27;m not sure Kagi could afford what market rates would be here. They also spent billions developing the technology to crawl, index, and rank billions of pages; factoring that in, again I don&#x27;t think a good price can be put on it.<p>What even is market rate? Kagi themselves admits there&#x27;s no market; the one competitor quit providing the service.<p>Obviously Google doesn&#x27;t want to become an index provider.
    • dangoor10 hours ago
According to the article, the judge&#x27;s memorandum said about index data access:<p>&gt; Google must provide Web Search Index data (URLs, crawl metadata, spam scores) at marginal cost.<p>I&#x27;m guessing that the &quot;marginal cost&quot; of a search is small and not connected to how much ad revenue that search is worth.
  • sabslikesobs12 hours ago
    I like that there&#x27;s a list of primary sources at the bottom.<p>Kagi&#x27;s AI assistant has been satisfying compared to Claude and ChatGPT, both of which insisted on having a personality no matter what my instructions said. Trying to do well-sourced research always pissed me off. With Kagi it gives me a summary of sources it&#x27;s found and that&#x27;s it!
  • 1vuio0pswjnm78 hours ago
Google has appealed and moved for a partial stay re: the remedies discussed in this blog post<p><a href="https:&#x2F;&#x2F;storage.courtlistener.com&#x2F;recap&#x2F;gov.uscourts.dcd.223205&#x2F;gov.uscourts.dcd.223205.1471.0.pdf" rel="nofollow">https:&#x2F;&#x2F;storage.courtlistener.com&#x2F;recap&#x2F;gov.uscourts.dcd.223...</a><p>Will Kagi file an amicus brief in support of the plaintiffs?<p>Perhaps Google will fund amici in support of their position, as they did in the Epic appeal.<p><a href="https:&#x2F;&#x2F;www.law.com&#x2F;nationallawjournal&#x2F;2025&#x2F;01&#x2F;10&#x2F;fight-over-amicus-funding-disclosure-surfaces-in-google-play-appeal&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.law.com&#x2F;nationallawjournal&#x2F;2025&#x2F;01&#x2F;10&#x2F;fight-over...</a>
  • direwolf2014 hours ago
    I hope they cache search results to further reduce the number of calls to Google.<p>And Marginalia Search was not mentioned? Marginalia Search says they are licensing their index to Kagi. Perhaps it&#x27;s counted under &quot;Our own small-web index&quot; which is highly misleading if true.
    • z6412 hours ago
There is a practical limit that we can&#x27;t cache results for too long; search engine users are particularly sensitive to stale data, especially around current events. Without a holistic and reliable way to know when the cache ought to be invalidated, our caching is mostly focused on mitigating &quot;abuse&quot;, e.g., someone (or a bunch of people) spamming the same search in a short timespan; no sense in repeating all those upstream calls.<p>Most &quot;cost saving engineering&quot; is involved in finding cases&#x2F;heuristics where we only need to use a subset of sources and can omit calls in the first place, without compromising quality. For example, we probably don&#x27;t need to fire all of our sources to service a query like &quot;youtube&quot; or &quot;facebook&quot;.<p>Marginalia data is physically consolidated into the same infra that we use for small web results in our SERP, but also among other small scale sources besides those two. That line is simply referring directly to <a href="https:&#x2F;&#x2F;kagi.com&#x2F;smallweb" rel="nofollow">https:&#x2F;&#x2F;kagi.com&#x2F;smallweb</a> (<a href="https:&#x2F;&#x2F;github.com&#x2F;kagisearch&#x2F;smallweb" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;kagisearch&#x2F;smallweb</a>).
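A minimal sketch of the abuse-mitigation style of caching described above: short-lived entries keyed on the normalized query, so a burst of identical searches triggers only one set of upstream calls. All names and the TTL are hypothetical, not Kagi&#x27;s actual implementation.

```python
import time

class ShortTTLQueryCache:
    """Short-lived cache for identical queries: absorbs bursts of the same
    search without serving stale results for long (illustrative sketch)."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._store = {}  # normalized query -> (inserted_at, results)

    @staticmethod
    def _normalize(query: str) -> str:
        # Collapse case and whitespace so trivially different spellings
        # of the same search hit the same entry.
        return " ".join(query.lower().split())

    def get(self, query: str):
        entry = self._store.get(self._normalize(query))
        if entry and time.monotonic() - entry[0] < self.ttl:
            return entry[1]  # hit: skip the upstream source calls
        return None  # miss or expired: caller fetches fresh results

    def put(self, query: str, results) -> None:
        self._store[self._normalize(query)] = (time.monotonic(), results)
```

A real system would bound the store&#x27;s size and likely vary the TTL by query type (news queries expiring faster than navigational ones).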
      • AlienRobot8 hours ago
        To me, a lot of problems with &quot;building a search engine&quot; don&#x27;t seem to be problems with &quot;building a search engine,&quot; they seem to be problems with &quot;building a Google.&quot;<p>Nobody said a search engine needs to have fresh data, for example. Nor has anybody said a search engine needs to index the entire web. Yet these are two things every search engine tries to do, and then they usually fail to compare with Google.<p>To put it in another way, the reason why TikTok succeeded against Youtube is exactly because TikTok wasn&#x27;t trying to be a Youtube.
        • Nextgrid7 hours ago
          I don&#x27;t think TikTok &quot;succeeded&quot; compared to Youtube? TikTok succeeded in popularizing short-form video, but I&#x27;d argue that&#x27;s a different product. YouTube is still king for longform video.<p>While there might be arguments for building a <i>different</i> product (and LLM-based search like Perplexity is trying it), there appears to be enough demand for a &quot;good Google&quot; that Kagi is trying to address.
    • xnx12 hours ago
      &gt; &quot;Our own small-web index&quot;<p>Has Kagi ever said what this is? I wouldn&#x27;t be at all surprised if it is just kagi.com pages or a download of Wikipedia.
      • z6412 hours ago
        <a href="https:&#x2F;&#x2F;github.com&#x2F;kagisearch&#x2F;smallweb" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;kagisearch&#x2F;smallweb</a>
    • packetlost14 hours ago
      The index is not necessarily the code, but the dataset. IMO it would be better to be more open about the technical stack, but I don&#x27;t think this feels dishonest to me.
  • adsharma6 hours ago
Why didn&#x27;t I see anything about Common Crawl?<p>Exa, Parallel, and a whole bunch of companies doing information retrieval under the &quot;agent memory&quot; category belong in this discussion.
  • senko10 hours ago
    A full up-to-date index of the searchable web should be a public commons good.<p>This would not only allow better competition in search, but fix the &quot;AI scrapers&quot; problem: No need to scrape if the data has already been scraped.<p>Crawling is technically a solved problem, as witnessed by everyone and their dog seemingly crawling everything. If pooled together, it would be cheaper and less resource intensive.<p>The secret sauce is in what happens afterwards, anyway.<p>Here&#x27;s the idea in more detail: <a href="https:&#x2F;&#x2F;senkorasic.com&#x2F;articles&#x2F;ai-scraper-tragedy-commons" rel="nofollow">https:&#x2F;&#x2F;senkorasic.com&#x2F;articles&#x2F;ai-scraper-tragedy-commons</a><p>I&#x27;m under no illusion something like that <i>will</i> happen .. but it <i>could</i>.
    • moebrowne9 hours ago
      Isn&#x27;t this what CommonCrawl are doing?<p><a href="https:&#x2F;&#x2F;commoncrawl.org&#x2F;" rel="nofollow">https:&#x2F;&#x2F;commoncrawl.org&#x2F;</a>
    • azornathogron9 hours ago
      Is crawling really solved?<p>Any naive crawler is going to run into the problem that servers can give different responses to different clients which means you can show the crawler something different to what you show real users. That turns crawling into an antagonistic problem where the crawler developers need to continually be on the lookout for new ways of servers doing malicious things that poison&#x2F;mislead the index.<p>Otherwise you&#x27;ll return junk spam results from spammers that lied to the crawler.<p>I&#x27;ve never done it so maybe it&#x27;s easier than I imagine but I wouldn&#x27;t be quick to assume that crawling is solved.
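Detecting the cloaking described above requires fetching the page as both a crawler and a browser-like client and comparing what comes back. A crude illustrative heuristic (the threshold is an arbitrary assumption; real detectors also compare links, scripts, and rendered DOMs):

```python
import difflib

def looks_cloaked(crawler_body: str, browser_body: str,
                  threshold: float = 0.7) -> bool:
    """Flag a URL when the HTML served to a crawler user-agent diverges
    sharply from the HTML served to a browser-like client.
    Crude sketch: spammers can defeat simple text similarity, which is
    why this stays an antagonistic, ongoing problem."""
    similarity = difflib.SequenceMatcher(
        None, crawler_body, browser_body).ratio()
    return similarity < threshold
```

Even this toy version hints at the cost: every spot-check doubles the fetch load, and sophisticated cloakers key off IP ranges rather than user-agents.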
  • stacktraceyo5 hours ago
Is there a crowd-indexed style of search index? Like, instead of relying completely on crawling, you rely on something like a browser extension that indexes pages as people browse. Or maybe submitting your site to the index directly instead of waiting to be crawled.
  • jxmesth2 hours ago
    Honestly, would be very cool if someone could make a search engine of <i>only</i> human-produced content. I know it&#x27;s going to be hard and compute intensive but I don&#x27;t think it&#x27;s impossible. In fact, Google could do it. A paid service for only human made content. Obviously there would be a margin of error as we can never be 100% sure if something really is AI written.
    • HellsMaddy1 hour ago
      Kagi is doing something similar to this, though it&#x27;s not trying to remove absolutely all AI, just &quot;slop&quot;: <a href="https:&#x2F;&#x2F;help.kagi.com&#x2F;kagi&#x2F;features&#x2F;slopstop.html" rel="nofollow">https:&#x2F;&#x2F;help.kagi.com&#x2F;kagi&#x2F;features&#x2F;slopstop.html</a>
  • stephen_cagle12 hours ago
One interesting point was the original PageRank algorithm greatly benefited from the fact that we kinda only had &quot;text matching&quot; search before Google (my memory was AltaVista at the time).<p>Because text matching was so difficult to search with, whenever you went to a site, it would often have a &quot;web of trust&quot; at the bottom where an actual human being had curated a list of other sites that you might like if you liked this site.<p>So you would often search with keywords (often literals), then find the first site, then recursively explore the web of trust links to find the best site.<p>My suspicion has always been that Google (PageRank) benefited greatly from the human curated &quot;web of trust&quot; at the bottom of pages. But once Google came out, search was much better, and so human beings stopped creating &quot;web of trust&quot; type things on their site.<p>I am making the point that Google effectively benefited from the large amount of human labor put into connecting sites via WOT, while simultaneously (inadvertently) destroying the benefit of curating a WOT. This means that by succeeding at what they did, they made it much more difficult for a Google#2 to come around and run the exact same game plan with even the exact same algorithm.<p>tldr; Google harvested the links that were originally curated by human labor, the incentive to create those links is gone now, so the only remaining &quot;links&quot; between things are now in the Google Index.<p>Addendum: I asked Claude to help me think of a metaphor, and I really liked this one as it is so similar.<p>``` &quot;The railroad and the wagon trails&quot;<p>Before railroads, collective human use created and maintained wagon trails through difficult terrain. The railroad company could survey these trails to find optimal routes. Once the railroad exists, the wagon trails fall into disuse and the pathfinding knowledge atrophies. A second railroad can&#x27;t follow trails that are now overgrown. ```
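Those curated &quot;web of trust&quot; links are exactly the edges PageRank walks. A toy power-iteration version over a hand-curated link graph (illustrative only; the published algorithm and Google&#x27;s production system add many refinements):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank over a dict of page -> list of outbound
    links. Human-curated links between sites are the input signal."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page keeps a baseline of (1 - damping) / n ...
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:
                # Dangling page: spread its rank evenly over all pages.
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                # ... and passes the rest along its outbound links.
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank
```

Run on a tiny graph where two pages both vouch for a hub, the hub ends up ranked highest, which is the whole point: rank flows along the links humans chose to create.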
    • keeda10 hours ago
<i>&gt; I am making the point that Google effectively benefited from the large amount of human labor...</i><p>This is exactly right, but the thing most people miss is that Google has been using human intelligence at massive scale even <i>to this day</i> to improve their search results.<p>Basically, as people search and navigate the results, Google harvests their clicks, hovers, dwell-time and other browsing behavior to extract critical signals that help it &quot;learn&quot; which pages the users actually found useful for the given query. (Overly simplified: click on a link but click back within a minute to go to the next link -&gt; downrank, but spend more time on that link -&gt; uprank.)<p>This helps it rank results better and improve search overall, which keeps people coming back and excludes competitors. It&#x27;s like the web of trust again, except it&#x27;s clicks of trust, and it&#x27;s <i>only</i> visible to Google <i>and</i> is a never-ending self-reinforcing flywheel!<p>And if you look at the infrastructure Google has built to harvest this data, it is so much bigger than the massive index! They harvest data through Chrome, ad tracking, Android, Google Analytics, cookies (for which they built Gmail!), YouTube, Maps and so much more.<p>So to compete with Google Search, you don&#x27;t need just a massive index, you also need the extensive web infra footprint to harvest user interactions at massive scale, which means the most popular and widely deployed browser, mobile OS, ad tracking, analytics script, email provider, maps, etc, etc.<p>This also explains why Google spent so many billions in &quot;traffic acquisition costs&quot; (i.e. payments for being the Search default) every year, because that was a direct driver of both: 1) ad revenue, and 2) maintaining its search quality.<p>This wasn&#x27;t really a secret, but it (rightfully) turned out to be a major point in the recent Antitrust trial, which is why the proposed remedies (as TFA mentions) include the sharing of search index <i>and</i> &quot;interaction data.&quot;
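A toy version of that &quot;click back within a minute -&gt; downrank, dwell longer -&gt; uprank&quot; signal. Every name and threshold here is invented for illustration; nothing about Google&#x27;s actual ranking system is implied.

```python
def rerank_by_dwell(results, avg_dwell_seconds, short_click_cutoff=60.0):
    """Re-order a result list using observed dwell time: pages users
    bounced from quickly sink, pages they stayed on rise.
    Hypothetical sketch of an interaction-data signal."""
    def score(indexed):
        position, url = indexed
        dwell = avg_dwell_seconds.get(url, 0.0)
        bounced = dwell < short_click_cutoff  # quick back-click -> demote
        # Prefer non-bounced pages, then longer dwell, then original order.
        return (not bounced, dwell, -position)
    ranked = sorted(enumerate(results), key=score, reverse=True)
    return [url for _, url in ranked]
```

The flywheel point is that only a party observing billions of these interactions can estimate dwell reliably enough to feed a signal like this back into ranking.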
  • keeda9 hours ago
Google&#x27;s advantage is not just in its index and algorithms, it is that it has built a self-reinforcing flywheel that data mines <i>human attention</i> at massive scale to improve their search results.<p>This comment (<a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=46709957">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=46709957</a>) points out that Google got its start via PageRank, which essentially ranked sites based on links <i>created by humans</i>. As such, its primary heuristic was what humans thought was good content. Turns out, this is still how they operate.<p>Basically, as people search and navigate the results, Google harvests their clicks, hovers, dwell-time and other browsing behavior -- i.e. tracking what they pay <i>attention</i> to -- to extract critical signals to &quot;learn&quot; which pages the users actually found useful for the given query. This helps it rank results better and improve search overall, which keeps people coming back, which in turn gives them more queries and data, which improves their results... a never-ending flywheel.<p>And competitors have no hope of matching this, because if you look at the infrastructure Google has built to harvest this data, it is so much bigger than the massive index! They harvest data through Chrome, ad tracking, Android, Google Analytics, cookies (for which they built Gmail!), YouTube, Maps, and so much more. So to compete with Google Search, you don&#x27;t need just a massive index, you also need the extensive web infra footprint to harvest user interactions at massive scale, meaning the most popular and widely deployed browser, mobile OS, ad footprint, analytics, email provider, maps...<p>This also explains why Google spends so many billions in &quot;traffic acquisition costs&quot; (i.e. payments for being the Search default) every year, because that is a direct driver of both: 1) ad revenue, and 2) maintaining its search quality.<p>This wasn&#x27;t really a secret, but it turned out to be a major point in the recent Antitrust trial, which is why the proposed remedies (as TFA mentions) include the sharing of search index <i>and</i> &quot;interaction data.&quot;<p>We all knew &quot;if you&#x27;re not paying for it, you&#x27;re the product&quot; but the fascinating thing with Google is:<p>- They charge advertisers to monetize our attention;<p>- They harvest our attention to better rank results;<p>- They provide better results, which keeps us coming back, and giving them even more of our attention!<p>Attention is all you need, indeed.
    • Nextgrid7 hours ago
      &gt; &quot;learn&quot; which pages the users actually found useful for the given query<p>But due to their business model I&#x27;m not sure they are ranking &quot;usefulness&quot; as much as you think.<p>Useful results ultimately don&#x27;t benefit Google because Google makes no money on them. Google makes money on ads - either ads on the search results page, ads on the destination pages or (indirectly) from steering users to pages which have Google Analytics.<p>It&#x27;s likely the actual algorithm balances usefulness to the user with usefulness to Google. You don&#x27;t want to serve up exclusively spam&#x2F;slop as users might bounce, but you also don&#x27;t want to serve up the <i>best</i> result because the user will prefer <i>it</i> over the ad on the SRP page. So it has to be a mix of both - you&#x27;ll <i>eventually</i> get a good result, after many attempts (during which you&#x27;ve been exposed to ads).<p>Google does enjoy the myth that they are unable to combat spam&#x2F;slop while in reality they do profit off it.
      • keeda1 hour ago
        That is also the thesis of this piece: <a href="https:&#x2F;&#x2F;www.wheresyoured.at&#x2F;the-men-who-killed-google&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.wheresyoured.at&#x2F;the-men-who-killed-google&#x2F;</a><p>It is plausible, but I&#x27;d guess Google would not risk that. I&#x27;m sure Google has pulled other shenanigans to get more clicks, like stuffing more and more ads, and making ads look like results (something even I personally have fallen for once), but I think they&#x27;re too smart to mess with their sacred cash cow.
  • yomismoaqui13 hours ago
One thing I have discovered after using AI chats that include a websearch tool is that I don&#x27;t want to delve into different blogs, Medium posts, Stack Overflow threads with passive-aggressive mod comments, dismissing cookie banners... Sorry, I just want the info I&#x27;m looking for; I don&#x27;t care for your personal expression or need to monetize your content.<p>There are other times (usually not work related) when I want to explore the web and discover some nice little blog or special corner of the net. This is what my RSS feed reader is for.
    • kqr13 hours ago
      With Kagi you can opt in to an LLM summary of the search result by appending a question mark to the query. It&#x27;s a neat mechanism when it works!
  • weisnobody12 hours ago
I think the crawled data should have to be shared, but I&#x27;m not convinced that Google should have to share their index.<p>It may be impracticable to share the crawled data, but from the standpoint of content providers, having a single entity collecting the information (rather than a bunch of people doing it) would seem to be better for everyone. Likely need to have some form of robots.txt which would allow the content provider to indicate how their content could be used (i.e. research, web search, AI, etc.).<p>The people accessing the crawled data would end up paying (reasonable) fees to access the level of data they want, and some portion of that fee would go to the content provider (30% to the crawler and 70% to the content provider? :P maybe).<p>Maybe even go so far as to allow the paywalled content providers to set a price on accessing their data for the different purposes. Should they be allowed to pick and choose who within those types should be allowed (or have it be based on violations of the terms of access)?<p>It seems in part the content providers have the following complaints:<p><pre><code>* Too many crawlers (see note below re crawlers)
* Crawlers not being friendly
* Improper use of the crawled data
* Not getting compensated for their content
</code></pre>Why not the index? The index, to me, is where a bunch of the &quot;magic&quot; happens and where individual companies could differentiate themselves from everyone else.<p>Why can&#x27;t Microsoft retain Bing traffic when it&#x27;s the default on stock Windows installs?<p><pre><code>* Do they not have enough crawled data?
* Their index isn&#x27;t very good?
* Searching their index isn&#x27;t good?
* The way they present the data is bad?
* Google is too entrenched?
* Combination of the above?
</code></pre>There are several entities intending to crawl all &#x2F; large portions of the Internet: Baidu, Bing, Brave, Google, DuckDuckGo, Gigablast, Mojeek, Sogou and Yandex [1]. That does not include any of the smaller entities, research projects, etc.<p>[1] <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Search_engine#2000s–present:_Post_dot-com_bubble" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Search_engine#2000s–present:_P...</a> (2019)
  • jiehong9 hours ago
    I think one side problem is that part of the web is not even searchable with a search engine.<p>Here are some examples:<p>- Discord<p>- WeChat (is it the web?)<p>- Rednote<p>- TikTok (partially)<p>- X (partially)<p>- JSTOR (it finds daily, but you find more stuff on the website directly)<p>- any stuff with a login, obviously.
    • reddalo8 hours ago
&gt; Discord<p>Damn, I can&#x27;t stand open-source projects that host their &quot;forums&quot; on Discord. It&#x27;s a nightmare to use, it&#x27;s heavy, slow, and it&#x27;s completely unsearchable from the web.<p>I wonder what went wrong with our society.
      • cyberrock6 hours ago
        First of all not everyone wants spectators and gawkers on all of their conversations. As for open solutions, IRC didn&#x27;t provide chat history for the common folk (no, most users are not able to host their own Pi Zero bouncer, especially back in 2017), and Matrix development was too slow (Elements implemented message pinning in 2022), so the rest was history. There was just no alternative to Slack or Discord.
  • sharpshadow11 hours ago
    If Google provides a Search Index it will be the censored version therefore still politically acceptable. The “Layer 1” idea will not happen.
    • direwolf2011 hours ago
      That&#x27;s why Kagi combines results from multiple sources, just as it does with Yandex.
  • jeffbee13 hours ago
    &quot;We will simply access the index&quot; has always struck me as wild hand-waving that would instantly crumble at first contact with technical reality. &quot;At marginal cost&quot; is doing a huge amount of work in this article.
  • user393938213 hours ago
For anyone not acquainted, Kagi is excellent and the people who work there strike me as nice and competent. I’m usually a harsh critic. Highly recommended.
    • flkiwi12 hours ago
I&#x27;ve gotten more value out of it than just about any ongoing subscription I have. It&#x27;s clean, fast, deeply customizable (e.g., excluding &quot;answers&quot; websites or any other domain you never want to see again), and, for what it is, inexpensive. Honestly if Google (or Bing) worked like Kagi does, I&#x27;d trade some of the privacy for the utility.
  • nige12313 hours ago
The user data (anonymised) and analytics also need to be shared.
  • the_arun13 hours ago
If Google is serving 90% of traffic and others are unable to enter, doesn&#x27;t that mean Google is doing something right for the customer and others are unable to outcompete it? Isn&#x27;t this how life works?
    • CGMthrowaway13 hours ago
Google is allowed to be big, be better, and win users. But happy customers are not the full test of monopolization. The real question is: &quot;Could a meaningfully better search engine realistically displace Google today?&quot; If the answer is no, then competition is broken.
      • xnx12 hours ago
        &gt; &quot;Could a meaningfully better search engine realistically displace Google today?”<p>ChatGPT clearly demonstrated that displacing Google is possible. All previous monopoly arguments seemed even more flimsy after that.
        • Nextgrid7 hours ago
          ChatGPT did not build a search engine though. They built something else (equally impressive) and <i>then</i> were able to use their weight to enter the web search business where most sites now <i>have</i> to allow them in.<p>While it&#x27;s good that building other products is possible, it doesn&#x27;t detract from the point that <i>search engines</i> are a de-facto monopoly.
        • b3kart11 hours ago
I think you’re proving the monopoly argument yourself: if the only way to compete with Google is an innovation that generations of scientists have been working towards, it does paint a grim picture of competition in this space. Besides, are we ignoring Gemini?
          • charcircuit10 hours ago
Google already used AI and language models before ChatGPT came out. If you wanted a state-of-the-art search &#x2F; recommendation engine you needed those innovations from scientists already.
    • rafterydj13 hours ago
      This is a woefully naive view on the nature of monopolies. You could have made the same argument for Standard Oil.
    • hamdingers12 hours ago
      Is the user&#x27;s choice to use google a meaningful one when they&#x27;re effectively the only game in town?
    • giantrobot12 hours ago
      Google must be right for the <i>customer</i> because Google pays billions of dollars to be the default search engine for all the major browsers. And end users are <i>notorious</i> for changing application defaults.
    • soiltype13 hours ago
      ...No. Not at all. Not in the case of Google and generally that&#x27;s not &quot;how life works&quot;. If it <i>was</i> true, why would Google spend so much money to be the default search engine in so many devices&#x2F;browsers?
  • OGEnthusiast14 hours ago
    Sounds like we need a nationalized search engine company then?
    • browningstreet13 hours ago
I wouldn&#x27;t trust a nationalized search engine company.<p>That said, there are projects like Common Crawl and, in Europe, Ecosia + Qwant.<p>I personally would like to see a search engine PaaS and a music streaming library PaaS that would let others hook up and pay direct usage fees.
      • NitpickLawyer13 hours ago
        &gt; and in Europe, Ecosia<p>I tried. It&#x27;s just not good enough. Quick example: yesterday I set up a workstation with Ubuntu, wanting to try out wayland. One of the things I wanted was to run an app (w&#x2F; gui) from another (unprivileged) user under my own user. Ecosia gave me bad old stuff. Tried for a few minutes, nothing useful. Switched to google, one of the first results was about waypipe. Searched waypipe on ecosia. 1 and a half pages of old content. Glaringly, not one of those results was the ubuntu.manpages entry on waypipe. <i>shrug</i>
      • shadowgovt13 hours ago
        An interoperable search index access standard might work. We&#x27;ve done something similar for peering and the backbone of the IP-layer interconnects themselves.
        • direwolf2011 hours ago
          You have to make it economically preferable, and there&#x27;s no known solution to this. Large networks are still using their positions to bully smaller ones off the IP-layer internet backbone.
  • hsuduebc213 hours ago
    It is even worse that Google search has become shit in recent years. They gatekeep relevant information for themselves instead of using it to improve search quality. As always, if you have no competition, your innovation goes only towards cost reduction, not product improvement.
    • warkdarrior12 hours ago
      If Google Search is shit, why does Kagi want access to it?
      • JaggedJax12 hours ago
        They want access to the index. They will perform their own sorting to determine the best results to show from that index.
        • b3kart11 hours ago
          …without having advertiser interests to cater to.
  • WhereIsTheTruth13 hours ago
    Kagi&#x27;s &quot;waiting for dawn&quot; is just waiting for Google to legitimize their reseller business<p>Meanwhile, users pay a premium to pretend they&#x27;re not using Google<p>Fascinating delusion
    • b3kart12 hours ago
      &gt; Meanwhile, users pay a premium to pretend they&#x27;re not using Google<p>My searches can’t be tied to me by Google for their ad targeting: this is worth paying a premium for, and I am glad Kagi are providing this service.<p>You seem to have a very limited understanding of the value Kagi provides.
      • yuugha18389 hours ago
        I have a limited understanding of the value Christianity provides. That neither means that Christianity provides no value, nor does it mean that God exists.
    • miloignis8 hours ago
      With Kagi being $55-$110 a year and Google making &gt;$200 a year per US user, it&#x27;s arguably a discount.
    • Nextgrid7 hours ago
      Users pay a premium to have Google&#x27;s results cleaned out of spam&#x2F;trash. It&#x27;s effectively paying someone to cut out the newspaper ads for you and then give you the resulting ad-free paper.
  • echelon6 hours ago
    If there are any Kagi folks here, I&#x27;ve come up with a new angle to attack Google&#x27;s anti-competitive position that could be incredibly effective:<p><a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=46681985">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=46681985</a><p><a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=44546519">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=44546519</a><p>I&#x27;m going to send this idea to my legislators, the EU, Sam Altman, Tim Sweeney, Elon Musk, et al.; I just haven&#x27;t had time to put this together yet.<p>Google is a monopolist scourge and needs to be knocked down a peg or two.<p>This should also apply to the iPhone and Android app stores.
  • ares62313 hours ago
    Kagi should start building an index of sites that are trying to escape the current slop internet. I know they have the Small Web thing. But I’d like to see an index of a “neo internet” that blocks Google et al.
    • z6411 hours ago
      I&#x27;ve been tossing around the very early idea of seeing what we can do to elevate alcoves of the web such as Gemini[1] through Kagi. I am slightly conscious that some people might not like us operating in that space; it&#x27;s been on my TODO to poll people about it and take a quick pulse. I love the tech and think we could give it meaningful exposure.<p>Is this along the lines of what you have in mind - any other active efforts you&#x27;re aware of that you think we should look into?<p>[1] <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Gemini_(protocol)" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Gemini_(protocol)</a>
      • freediver11 hours ago
        Relevant <a href="https:&#x2F;&#x2F;github.com&#x2F;kagisearch&#x2F;smallweb&#x2F;pull&#x2F;425" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;kagisearch&#x2F;smallweb&#x2F;pull&#x2F;425</a>