36 comments

  • adamcharnock 4 hours ago
    We've [1] been using Hetzner's dedicated servers to provide Kubernetes clusters to our clients for a few years now. The performance is certainly excellent, we typically see request times halve. And because the hardware is cheaper we can provide dedicated DevOps engineering time to each client. There are some caveats though:

    1) A staging cluster for testing updates is really a must. YOLO-ing prod updates on a Sunday is no one's idea of fun.

    2) Application-level replication is king, followed by block-level replication (we use OpenEBS/Mayastor). After going through all the Postgres operators we found StackGres to (currently) be the best.

    3) The Ansible playbooks are your assets. Once you have them down and well-commented for a given service, re-deploying that service in other cases (or again in the future) becomes straightforward.

    4) If you can, I'd recommend a dedicated 10G network to connect your servers. 1G just isn't quite enough when it comes to the combined load of prod traffic, plus image pulls, plus inter-service traffic. This also gives a 10x latency improvement over AWS intra-AZ.

    5) If you want network redundancy you can create a 1G vSwitch (VLAN) on the 1G ports for internal use. Give each server a loopback IP, then use BGP to distribute routes (bird).

    6) MinIO clusters (via the operator) are not that tricky to operate as long as you follow the well-trodden path. This provides you with local high-bandwidth, low-latency object storage.

    7) The initial investment to do this does take time. I'd put it at 2-4 months of undistracted skilled engineering time.

    8) You can still push ancillary/annoying tasks off onto cloud providers (personally I'm a fan of CloudFlare for HTTP load balancing).

    [1]: https://lithus.eu
    • bigbones 2 hours ago
      > dedicated 10G network to connect your servers

      Do you have to ask Hetzner nicely for this? They have a publicly documented 10G uplink option, but that is for external networking and IMHO heavily limited (20TB limit). For internal cluster IO, 20TB could easily become a problem.
      • adamcharnock 1 hour ago
        It is under their costing for 'additional hardware' [1]. You need to factor in the switch, uplink for each server, and the NIC for each server.

        [1]: https://docs.hetzner.com/robot/general/pricing/price-list-for-additional-products/
      • nh2 2 hours ago
        Hetzner does not charge for internal bandwidth.
    • bambambazooka 1 hour ago
      > 5) If you want network redundancy you can create a 1G vSwitch (VLAN) on the 1G ports for internal use. Give each server a loopback IP, then use BGP to distribute routes (bird).

      Are you willing to share an example config for that part?
      • adamcharnock 1 hour ago
        I don't have one I can share publicly, but if you send me an email I'll see what I can do :-) Email is in my profile.

        You'll need a bit of baseline networking knowledge.
    • sureIy 4 hours ago
      > I'd put it at 2-4 months of undistracted skilled engineering time.

      How much is that worth to your company/customer vs a higher monthly bill for the next 5 years?

      As a consultancy company, you want to sell that. As a customer, I don't see how that's worth it at all, unless I expect a 10k/month AWS bill.

      xkcd comes to mind: https://xkcd.com/1319/
      • adamcharnock 4 hours ago
        > As a consultancy company, you want to sell that. As a customer, I don't see how that's worth it at all.

        Well I do rather agree, but as a consultancy I'm biased.

        But let's do some math. Say it's 4 months (because who has uninterrupted time) at a senior rate of $1000/day. 20 days a month, so 80 days, is an $80k outlay. That's assuming you can get the skills (because AWS et al like to hire these kinds of engineers).

        Say one wants a 3-year payback; that is $2,200/month of savings you need. Which seems highly achievable given some of the cloud spends I've seen, and given that I think an 80-90% reduction in cloud spend is a good ballpark.

        The appeal of a consultancy is that we'll remove the up-front investment, provide the skills, de-risk the whole endeavour, even put engineers within your team, but you'll _only_ save 50%.

        The latter option is much more appealing in terms of hiring, risk, and cash flow. But if your company has the skills, the cash, and the risk tolerance, then maybe the former approach is best.

        EDIT: I actually think the(/our) consultancy option is a really good idea for startups. Their infrastructure ends up being slightly over-built to start with, but very quickly they end up saving a lot of money, and they also get DevOps staffing without having to hire for it. Moreover, the DevOps resource available to them scales with their compute needs. (Also, we offer 2x the amount of DevOps days for startups for the first year to help them get up and running.)
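        (A worked version of that payback arithmetic, using the same illustrative figures as the comment above - not anyone's actual bill:

          \[ 4\ \text{months} \times 20\ \tfrac{\text{days}}{\text{month}} \times \$1000/\text{day} = \$80{,}000 \]
          \[ \frac{\$80{,}000}{36\ \text{months}} \approx \$2{,}200/\text{month of required savings} \]
        )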
        • nkmnz 3 hours ago
          This assumes there are no devops/consulting costs to set something up with AWS. My experience is that "the AWS way of doing XYZ" is almost as complicated as doing it the non-AWS way. On top of that: the non-AWS way is much more portable across hosting providers, so you decrease your business risks considerably.
          • adamcharnock 3 hours ago
            I wholeheartedly agree; I'm trying to be generous as I know I have a bias here.

            I think the AWS way made clear sense in the days before the current generation of tooling existed, when we were SSH-ing into our snowflake servers (for example). But now that we have tools like Kubernetes/Nomad/OpenShift/etc/etc, the logic just doesn't seem to add up any more.

            The main argument against it is generally of the form, "Yes, but we don't want to hire for non-cloud/bare-metal". Which is why I think a consultancy provides a good middle ground here – trading off cost savings against business factors.
            • nkmnz 1 hour ago
              Can you recommend any resources on how to approach the topic for a startup? Most startups have very similar needs, but every single "batteries included" solution that I've encountered so far explicitly excluded infrastructure and DevOps – either because it's out of scope for the creators, or because that's what they monetize (e.g. supabase).
              • adamcharnock 37 minutes ago
                How about we have a chat? I think it is hard for startups to justify implementing this infrastructure from scratch because that is a lot of time & skills that are really best focussed elsewhere.

                Ping me an email (see bio), always happy to chat.
  • tutfbhuf 16 hours ago
    I have experience running Kubernetes clusters on Hetzner dedicated servers, as well as working with a range of fully or highly managed services like Aurora, S3, and ECS Fargate.

    From my experience, the cloud bill on Hetzner can sometimes be as low as 20% of an equivalent AWS bill. However, this cost advantage comes with significant trade-offs.

    On Kubernetes with Hetzner, we managed a Ceph cluster using NVMe storage, MariaDB operators, Cilium for networking, and ArgoCD for deploying Helm charts. We had to handle Kubernetes cluster updates ourselves, which included facing a complete cluster failure at one point. We also encountered various bugs in both Kubernetes and Ceph, many of which were documented in GitHub issues and Ceph trackers. The list of tasks to manage and monitor was endless. Depending on the number of workloads and the overall complexity of the environment, maintaining such a setup can quickly become a full-time job for a DevOps team.

    In contrast, using AWS or other major cloud providers allows for a more hands-off setup. With managed services, maintenance often requires significantly less effort, reducing the operational burden on your team.

    In essence, with AWS, your DevOps workload is reduced by a significant factor, while on Hetzner, your cloud bill is significantly lower.

    Determining which option is more cost-effective requires a thorough TCO (Total Cost of Ownership) analysis. While Hetzner may seem cheaper upfront, the additional hours required for DevOps work can offset those savings.
    • supriyo-biswas 7 hours ago
      This is definitely some ChatGPT output being posted here, and your post history also has a lot of this "While X, Y also does Z. Y already overlaps with X" output.

      I'd like to see your breakdowns as well, given that the cost difference between a 2 vCPU, 4GB configuration (as an example) and a similar configuration on AWS is priced much higher.

      There's also https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner to reduce the operational burden that you speak of.
      • tutfbhuf 5 hours ago
        It is my output, but I use ChatGPT to fix my spelling and grammar. Maybe my prompt for that should be refined in order to not alter the wording too much.
        • redbell 4 hours ago
          While using ChatGPT to enhance your writing is not wrong by any means, reviewing the generated output and re-editing when necessary is essential to avoid a *robotic* writing style that may smell unhuman. For instance, these successive paragraphs: "In contrast, using AWS.." and "In essence, with AWS.." leave a bad taste in your brain when read consecutively.
        • lproven 2 hours ago
          > I use ChatGPT to fix my spelling and grammar

          I have a better suggestion, which will save time, energy, money, and human work.

          Don't.

          Write it yourself. If you can't, don't post.
          • simtel20 1 hour ago
            Why would you want to restrict contributions from people with relevant experience and willingness to share, just because the author ran a spelling and grammar check?
            • mbreese 10 minutes ago
              It’s overkill for this audience. HN is pretty forgiving of spelling and grammar mistakes, so long as the main information is clear. I’d encourage anyone that wants to share a comment here not to use an LLM to help, but just to try your best to write it out yourself.

              Really - your comment on its own is good enough without the LLM. (And if you find an error, you can always edit!)

              If we really wanted ChatGPT’s input on a topic (or a rewording of your comment), we can always ask ChatGPT ourselves.
            • theshrike79 46 minutes ago
              Unless the spelling and grammar is HORRENDOUS, people won't really care. Bad English is the world's most used language; we all deal with it every day.

              Just using your browser's built-in proofreader is enough in 99.9% of cases.

              Using ChatGPT to rewrite your ideas will make them feel formulaic (LLMs have a style, and people exposed to them will spot it instantly, like a code smell) and usually needlessly verbose.
            • supriyo-biswas 56 minutes ago
              Everyone claims it’s a spelling and grammar check, but it’s the OP trying to spread "we tried running self-managed clusters on Hetzner and it only saved us 20% while being a chore in terms of upkeep" into a full essay that causes all that annoying filler.

              You’d assume people would use tools to deliver a better and well-composed message; whereas most people use LLMs to decompress their text into an inefficient representation. Why this is I have no idea, but I’d rather have the raw unfiltered thought from a fellow human than someone trying to sound fancy and important.

              Not to say I still find the 20% claim a little suspect.
              • tuukkah 27 minutes ago
                You do realize it wasn't "saved us 20%" but "Hetzner can sometimes be as low as 20% of an equivalent AWS bill", i.e. saved 80%?
      • 0xFF0123 6 hours ago
        While I agree that your characterisation is true for a lot of ChatGPT output, it can also be true for a human explaining their nuanced point of view.
        • ratg13 44 minutes ago
          Most humans don't say a couple sentences and then re-summarize them 3 more times unless they are speaking to someone with a learning disability.
    • MathMonkeyMan 10 hours ago
      I've never operated a Kubernetes cluster except for a toy dev cluster for reproducing support issues.

      One day it broke because of something to do with certificates (not that it was easy to determine the underlying problem). There was plenty of information online about which incantations were necessary to get it working again, but instead I nuked it from orbit and rebuilt the cluster. From then on I did this every few weeks.

      A real Kubernetes operator would have tooling in place to automatically renew certs and who knows what else. I imagine a company would have to pay such an operator.
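      (A minimal sketch of the kind of tooling meant here: a check that warns before the API server's serving certificate expires. The host, port and threshold are made-up examples, it assumes the `cryptography` package is installed, and it is not any particular operator's implementation:

        import datetime
        import socket
        import ssl

        from cryptography import x509

        API_HOST = "k8s.example.internal"  # hypothetical API server address
        API_PORT = 6443
        WARN_DAYS = 30

        def fetch_server_cert(host: str, port: int) -> x509.Certificate:
            """Grab the TLS certificate the API server presents, without verifying it."""
            ctx = ssl.create_default_context()
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    der = tls.getpeercert(binary_form=True)
            return x509.load_der_x509_certificate(der)

        cert = fetch_server_cert(API_HOST, API_PORT)
        days_left = (cert.not_valid_after - datetime.datetime.utcnow()).days
        if days_left < WARN_DAYS:
            print(f"WARNING: API server cert expires in {days_left} days")
        else:
            print(f"OK: {days_left} days of validity left")

      Wire something like this into a cron job or monitoring check and the "it broke because of certificates" surprise turns into an alert instead.)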
      • _bare_metal 6 hours ago
        This.

        I run BareMetalSavings.com [0], a toy for ballpark-estimating bare-metal/cloud savings, and the companies that have it hardest to move away from the cloud are those who are highly dependent on Kubernetes.

        It's great for the devs but I wouldn't want to operate a cluster.

        [0]: https://www.BareMetalSavings.com
      • declan_roberts 8 hours ago
        That's just not how it works on any scale other than "toy".
        • MathMonkeyMan 8 hours ago
          Right, but certs get out of date unless somebody does something about it, that was my point.
    • KaiserPro 2 hours ago
      Ceph is a bastard to run. It's expensive, slow, and just not really ready. Yes, I know people use it, but compared to a fully grown-up system (i.e. Lustre [don't; it's RAID 0 in prod] or GPFS [great but expensive]) it's just a massive time sink.

      You are much better off having a bunch of smaller file systems exported over NFS, making sure that you have block-level replication. Single-address-space filesystems are OK and convenient, but most of the time are not worth the cost of admin to get *reliable* at scale. Like a DB, shard your filesystems, especially as you can easily add mapping logic to Kubernetes to make sure you get the right storage to the right image.
      • sgarland 1 hour ago
        I agree that it is hideously complicated (to anyone saying “just use Rook,” I’ll counter that if you haven’t read through Ceph’s docs in full, you’re deluding yourself that you know how to run it), but given that CERN uses it at massive scale, I think it’s definitely prod-ready.
    • freedomben 5 hours ago
      I mostly agree, but it surprises me that people don't often consider a solution right in the center, such as OpenShift. You have a much, much lower burden for DevOps and all the power and flexibility of running on bare metal. It's a great hybrid between a fully managed and expensive service versus a complete build-your-own. It's expensive enough that for startups it is not likely a good option, but if you have a cluster with at least 72 GB of RAM or 36 CPUs going (about 9 mid-size nodes), you should definitely consider something like OpenShift.
    • mountainriver 10 hours ago
      Manually updating k8s clusters is a huge tradeoff. I can’t imagine doing that to save a couple bucks unless I was desperate
      • TheDong 4 hours ago
        I dunno, I've had to spend like two or three hours each month on updating mine for its entire lifetime (of over 5 years now), and that includes losing entire nodes to hardware failure and spinning up new ones.

        Originally it was Ansible, and so spinning up a new node or updating all nodes was editing one file (k8s version and SSH node list) and then running one Ansible command.

        Now I'm using NixOS, so updating is just bumping the version number, a hash, and typing "colmena apply".

        Even migrating the k8s cluster from Ansible to NixOS was quite easy; I just swapped one node at a time and it all worked.

        People are so afraid of just learning basic Linux sysadmin operations, and yet it also makes it way easier to understand and debug the system, so it pays off.

        I had to help someone else with their EKS cluster, and in the end debugging the weird EKS AMI was a nightmare and required spending more time than all the time I've had to spend on my own cluster over the last year combined.

        From my perspective, using EKS both costs more money, gives you a worse k8s (you can't use beta features, their AMI sucks), and also pushes you to have a worse understanding of the system, so that you can't understand bugs as easily, and when it breaks it's worse.
      • dijit 5 hours ago
        If the "couple of bucks" ends up being the cost of an entire team, then hire a small team to do it.

        Then get mad at them because they don't "produce value", and fold it into a developer's job with an even higher level of abstraction again. This is what we always do.
    • spwa4 14 hours ago
      > Determining which option is more cost-effective requires a thorough TCO (Total Cost of Ownership) analysis. While Hetzner may seem cheaper upfront, the additional hours required for DevOps work can offset those savings.

      Sure, but the TL;DR is going to be that if you employ n or more sysadmins, the cost savings will dominate, with 2 < n < 7. So for a given company size, Hetzner will start being cheaper at some point, and it will become more extreme the bigger you go.

      Second, if you have a "big" cost, whatever it is - bandwidth, disk space (essentially anything but compute) - the cost savings will dominate faster.
      • stackskipton 11 hours ago
        Not always. Employing sysadmins doesn't mean Hetzner is cheaper, because those "sysadmin/ops type people" are being hired to manage the Kubernetes cluster. And ops people who truly know Kubernetes are not cheap.

        Sure, you can get away with legoing some K3s stuff together for a while, but one major outage later, that cost saving might have entirely disappeared.
        • srockets 9 hours ago
          More than that: the more you use, the more discounts you can get from a major CSP, which would also reduce the TCO for using a managed service.
      • UltraSane 9 hours ago
        Even a short outage can completely wipe out any savings.
    • LordMignion 8 hours ago
      Yep. That's why we are building a managed service that runs on Hetzner, for lower costs, but still offers all the comforts of a managed service. https://cloud.gigahatch.ch/

      Obviously it's still in beta and not finished yet, so I wouldn't run my whole production on it, but it's very convenient for things like build agents.
    • kshri24 9 hours ago
      Is it just me or do the last 3 paragraphs feel like ChatGPT output?
      • tutfbhuf 5 hours ago
        I used GPT-4o to fix all my spelling and grammar mistakes. Maybe it went a little too far, but this is 100% my comment.
        • lproven 2 hours ago
          > this is 100% my comment

          No, it is not.
      • runeks 6 hours ago
        Isn't the point of ChatGPT to mimic sentences written by humans?
        • perching_aix 6 hours ago
          Kind of. But which humans? It's a bit like how the average person doesn't exist, except in the LLM world, now it does.
        • murderfs 6 hours ago
          GPT-4 is, but ChatGPT is fine-tuned to emit sentences that get rated well (by humans, and by raters trained to mimic human evaluation) in a conversational agent context.
      • andai 7 hours ago
        Yeah, I was wondering the same thing.
  • mythz 5 hours ago
    Been a happy Hetzner customer for over a decade, previously using their dedicated servers in their German DCs before migrating to their US Cloud VMs for better latency with the US. Slightly disappointed with their recent cut of the generous 20TB of free traffic down to 3TB (€1.19 per additional TB), but they still look to be a lot better value than all the other US cloud providers we've evaluated.

    Whilst I wouldn't run Kubernetes by choice, we've had success moving our custom SSH / Docker Compose deployments over to use GitHub Actions with kamal-deploy.org; easy to set up, and nice UX tools for monitoring remotely deployed apps [1].

    [1] https://servicestack.net/posts/kamal-deployments
    • Voultapher 5 hours ago
      Seems to be a US thing; maybe their peering partners are forcing them to raise prices. The German DC still sells the 20TB bandwidth https://www.hetzner.com/cloud/, but the US is an order of magnitude less for the same price :/
      • inemesitaffia 5 hours ago
        I don't see how traffic in Ashburn is more expensive than Frankfurt and Amsterdam.

        It's the sort of place where people say transit is cheaper than paid peering (for eyeball networks at least).

        I think carrying traffic from Europe for some images and videos might make sense financially. But there are always bulk CDNs.
        • kuschku 3 hours ago
          > I don't see how traffic in Ashburn is more expensive than Frankfurt and Amsterdam.

          The vast majority of Hetzner's traffic in Europe (and tbh, anyone's traffic) is *free peering*. Telekom is the one major exception.
  • jonas21 16 hours ago
    This is an interesting writeup, but I feel like it's missing a description of the cluster and the workload that's running on it.

    How many nodes are there, how much traffic does it receive, what are the uptime and latency requirements?

    And what's the absolute cost savings? Saving 75% of $100K/mo is very different from saving 75% of $100/mo.
    • jpgvm 3 hours ago
      In my experience no one bothers unless they are using GPUs or they are already at $100k/mo.

      I do think $100k/mo is the tipping point actually; that is $1.2M/yr.

      It costs around $400k/yr in engineering salaries to reasonably support a sophisticated bare-metal deployment (though such people can generally do that AND provide a lot of value elsewhere in the business, so its actual cost is really lower than this) and about $100k/yr in DC commitments, HW amortisation, and BW, roughly. So you save around $700k a year, which is great, but the benefit becomes much greater when your equivalent cloud spend is even bigger than that.
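      (Roughly, the break-even arithmetic being described, using only the ballpark figures from the comment above:

        \[ \underbrace{\$1.2\text{M/yr}}_{\text{cloud spend}} - \underbrace{\$400\text{k/yr}}_{\text{engineering}} - \underbrace{\$100\text{k/yr}}_{\text{DC + HW + BW}} \approx \$700\text{k/yr saved} \]
      )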
  • slillibri 19 hours ago
    When I worked in web hosting (more than 10 years ago), we would constantly be blackholing Hetzner IPs due to bad behavior. Same with every other budget/cheap VM provider. For us, it had nothing to do with geo databases, just behavior.

    You get what you pay for, and all that.
    • haroldp 5 hours ago
      It's always evolving, but these days the most common platforms attacking sites that I host are the big cloud providers, especially Azure. But AWS, Google, Digital Ocean, Linode, Contabo, etc. all host a lot of attacks trying to brute-force logins and search for common exploits.
    • SoftTalker 18 hours ago
      Yep, I had the same problem years ago when I tried to use Mailgun's free tier. Not picking on them - I loved the features of their product - but the free-tier IPs had a horrible reputation and mail just would not get accepted, especially by Hotmail or Yahoo.

      Any free hosting service will be overwhelmed by spammers and fraudsters. Cheap services the same, but less so, and the more expensive they are, the less they will be used for scams and spam.
      • thwarted 16 hours ago
        Tragedy of the Commons Ruins Everything Around Me.
    • mzhaase 2 hours ago
      I had to try multiple floating IPs on hcloud before I got one that wasn't blacklisted on the k8s repos.
    • UltraSane 9 hours ago
      AWS tries hard to keep its public IPs from getting on banlists.
    • Keyframe 16 hours ago
      Depending on the prices, maybe a valid strategy would be to have servers at Hetzner and then tunnel ingress/egress somewhere more prominent. Maybe adding the network traffic to the calculation still makes financial sense?
      • srockets 9 hours ago
        At $0.02/GB, it rarely does.
    • oblio 18 hours ago
      They could put the backend on Hetzner, if it makes sense (for example queues or batch processors).
  • surrTurr 6 hours ago
    > Hetzner volumes are, in my experience, too slow for a production database. While you may in the past have had a good experience running customer-facing databases on AWS EBS, with Hetzner's volumes we were seeing >50ms of IOWAIT with very low IOPS. See https://github.com/rook/rook/issues/14999 for benchmarks.

    I set up Rook Ceph on a Talos k8s cluster (with VM volumes) and experienced similarly low performance; however, I always thought that was because of the 1G vSwitch (i.e. a networking problem)?! The SSD volumes were quite fast.
    • tehlike 1 hour ago
      SSD volumes are physically on the same node and, afaik, not redundant. The cloud VMs' volumes are Ceph clusters behind the scenes, and writes need to commit to 3+ machines. It's both network latency and inherent process latency.

      Additionally, Hetzner has an IOPS limit of 5,000 and a write limit of some amount that does not scale with the size of the volume.

      50G has the same limits as 5TB.

      For this reason, people sometimes use different tablespaces in Postgres, for example.

      Ceph puts another burden on top of already-Ceph-based cloud volumes, btw, so don't do that.
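      (A rough sketch of the tablespace workaround mentioned above, assuming a second volume is already mounted and owned by the postgres user; the DSN, directory and table names are hypothetical:

        import psycopg2

        # Connect as a superuser; CREATE TABLESPACE cannot run inside a transaction
        # block, so autocommit is required.
        conn = psycopg2.connect("dbname=app user=postgres")  # hypothetical DSN
        conn.autocommit = True

        with conn.cursor() as cur:
            # Point a new tablespace at an empty directory on the extra volume, so
            # that volume's IOPS/throughput limits apply independently of the main
            # data volume.
            cur.execute("CREATE TABLESPACE fast_vol LOCATION '/mnt/volume2/pgdata'")
            # Move a hot table onto it (takes a lock and rewrites the table).
            cur.execute("ALTER TABLE events SET TABLESPACE fast_vol")

      The effect is that reads and writes for that table hit the second volume's separate limits rather than piling onto the primary data volume.)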
    • merpkz 6 hours ago
      In my limited experience, rook-ceph is strictly a bare-metal technology to deploy. On virtualization it will basically replicate your data to VM disks which are usually already replicated, so quite a bit of replication amplification will happen and tank your performance.
  • wvh 2 hours ago
    I work for a consultancy company that helps companies build and secure infrastructure. We have a lot of customers running Kubernetes at low-cost providers (like Hetzner), more local middle-tier providers, and the top three (AWS, GCP, Azure). We also have some governmental, financial and medical companies that cannot or will not run in public clouds, so they usually host on-prem.

    If Hetzner has an issue or glitch once a month, the middle-tier providers have one every 2-3 months, and a place like AWS maybe every 5-6 months. However, prices also follow that observation, so you have to carefully consider on a case-by-case basis whether adding some extra machines and backup and failure scenarios is a better deal.

    The major benefit of using basic hosting services is that their pricing is a lot more predictable; you pay for machines and scale as you go. Once you get hooked into all the extra services a provider like AWS offers, you might get some unexpectedly high bills, and moving away might be a lot harder. For smaller companies: don't make short-sighted decisions that threaten your ability to survive long-term by choosing the easy solution or "free credits" scheme early on.

    There is no right answer here, just trade-offs.
  • Hetzner_OL 4 hours ago
    Hi Bill, Wow! Thanks for the amazing write-up and for sharing it on your blog and here! I am so happy that we've helped you save so much money and that you're happy with our support team! It's a great way to start off the week! --Katie
  • esher 6 hours ago
    As far as I see, no one is mentioning sustainability AKA environmental impact or 'green hosting' here. Don't you care about that?

    I believe that Hetzner data centers in Europe (Germany, Finland) are powered by green energy, but not the locations in US.
    • huijzer 6 hours ago
      Data centers used 460 TWh, or about 2% of total worldwide electricity use, according to the IEA in 2022.

      In comparison, 30% of total energy (energy! Not electricity) goes to transport!

      As another point of comparison, transport in Sweden in 2022 used 137 TWh [1]. So the same order of magnitude as total datacenter energy use.

      And datacenters are powered by electricity, which increases the chance that it comes from renewable energy. Conversely, the chance that diesel comes from a renewable source is zero.

      So can we please stop talking about data center energy use? It's a narrative that the media is currently pushing, but like so many things, it makes no sense. It's not the thing we should be focusing on if we want to decrease fossil fuel use.

      [1]: https://www.energimyndigheten.se/en/energysystem/energy-consumption/
      • davedx 5 hours ago
        2% of total worldwide electricity use in 2022 is a shitload of electricity and emissions. Your argument is the same as those who argue "our country shouldn't care about emissions when China is the biggest emitter".

        If you dive into a detailed breakdown of emissions, you'll find that it's a complex hierarchy of categories. You can't just fix "all of transport" or treat it like "low-hanging fruit"; just look at how much time it's taken for EV penetration to be in any way significant, and look at how much of transport emissions come from aviation or shipping or other components.

        Any energy use that's measurable in whole percentage points of global emissions needs addressing. That includes data centers.
        • huijzer 2 hours ago
          > Your argument is the same as those who argue "our country shouldn't care about emissions when China is the biggest emitter".

          China and the US are in the same order of magnitude in emissions. So NO, that's absolutely not the argument I am making.

          > Any energy use that's measurable in whole percentage points of global emissions needs addressing

          But it isn't! That's my point. Electricity use is about 20% of total energy use. So if we talk about global emissions, data centers are only about 20% * 2% = 0.4% of total energy use.

          And then if we talk about total emissions, it's even lower, because 40% of electricity is generated from nuclear and renewables.

          > just look at how much time it's taken for EV penetration to be in any way significant

          Yes, so let's focus on that instead of data centers. Data centers are not the problem!

          EDIT: Also, CPUs and GPUs are still becoming more energy efficient. So I'm a bit skeptical of extrapolations which say that data centers will consume a large percentage of US energy. If the number of CPUs and GPUs doubles every 2 years, but energy efficiency doubles too, then overall energy usage doesn't grow so fast. Especially if old CPUs and GPUs are taken out of the system over time because they become too expensive to operate.
        • alt227 4 hours ago
          > our country shouldn't care about emissions when China is the biggest emitter

          To be fair, until China does something about their emissions, the rest of us are just pissing in the ocean.
          • doix 3 hours ago
            Eh, per capita China has lower emissions than the US whilst manufacturing and exporting significantly more.

            Everything is intertwined and tightly coupled; such simple statements are rarely accurate.
          • sofixa 3 hours ago
            China is actively working on reducing their emissions (they're building tons of nuclear and renewables, and have long-term plans for both), and a lot of their emissions come from manufacturing stuff the whole world uses.
          • blitzar 3 hours ago
            Don't shit in your own back yard, no matter what other people like to do.
      • esher 5 hours ago
        Thanks for sharing. I care about it. I run a small hosting company. Sure, there are many low-hanging fruits for fighting CO2 emissions that should be tackled first. I am also hopeful that energy from directly available renewables will be the most economic choice for building data centers anyhow, so that this is not a matter of belief any more.

        But on the other side, to bring down CO2 levels, fast change everywhere is required. As far as I can see, data center energy consumption continues to grow, specifically with AI.

        If I am not mistaken, data centers produce more CO2 than aviation.

        And sure, most 'green hosting' is probably 'greenwashing', yet I would still support and link initiatives such as: https://www.thegreenwebfoundation.org/
    • preisschild 5 hours ago
      They probably just use the local power grid. You can use ElectricityMaps to look up the average carbon intensity per kWh of those grids:

      https://app.electricitymaps.com/
      • kuschku 3 hours ago
        You can choose which electricity company provides the amount of power to the grid that you're using. While you don't get "your" electricity, overall you can still affect the carbon balance of the electricity that's produced in your name.

        Hetzner is using 100% green hydro and wind power for that, which is as sustainable as any grid-connected company can be.
      • sofixa 3 hours ago
        > They probably just use the local power grid

        A lot of EU datacenter providers specifically pick green electricity providers/sources, pride themselves on it, and use it in advertising their sustainability.

        Scaleway in particular are 100% CO2-free (they have it easy, most of their DCs are in France where it's easy to be fully nuclear+renewable). Hetzner are the same.
    • postepowanieadm 4 hours ago
      > I believe that Hetzner data centers in Europe (Germany, Finland) are powered by green energy, but not the locations in US.

      Green lignite.
      • kuschku 3 hours ago
        While fans of nuclear energy like to meme about the German power grid, Hetzner is — in so far as anyone with a grid connection can be — powered by 100% green wind and hydro energy.

        You can see the paperwork here:

        - https://cdn.hetzner.com/assets/Uploads/oekostrom-zertifikat-2025.pdf

        - https://cdn.hetzner.com/assets/Oomi-sertifikaatti-tuuli+vesi-Hetzner-2024-eng.pdf
  • Volundr 19 hours ago
    I haven't used it personally, but https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner looks amazing as a way to set up and manage Kubernetes on Hetzner. At the moment I'm on the Oracle free tier, but I keep thinking about switching to it to get off... well, Oracle.
    • mkreis 18 hours ago
      I'm running two clusters on it, one for production and one for dev. Works pretty well. With a schedule to reboot machines every Sunday for automatic security updates (openSUSE MicroOS). Also expanded machines for increased workloads. You have to make sure to inspect every change Terraform wants to make, but then you're pretty safe. The only downside is that every node needs a public IP, even though they are behind a firewall. But that is being worked on.
    • not_elodin 19 hours ago
      I've used this to set up a cluster to host a dogfooded journalling site.

      In one evening I had a cluster working.

      It works pretty well. I had one small problem when the auto-update wouldn't run on ARM nodes, which stopped the single node I had running at that point (with the control plane taint blocking the update pod from running on them).
    • maestrae 16 hours ago
      I recently read an article about running k8s on the Oracle free tier and was looking to try it. I'm curious, are there any specific pain points that are making you think of switching?
      • Volundr 14 hours ago
        Nope, just Oracle being a corp with a nasty reputation. Honestly it was easy to set up and has been super stable, and if you go ARM the amount of resources you get for free is crazy. I actually do recommend it for personal projects and the like. I'd just be hesitant about building a business based on any Oracle offering.
        • maestrae 7 hours ago
          Got it, thanks for the clarification! I’ll be using it for a personal project so that sounds great.
      • davidgl 5 hours ago
        I've got a couple of free ARM machines set up as a cluster for learning k8s, plus a few LBs in front of it. I use k3s, with pg rather than etcd. Been a great learning experience.
    • preisschild 5 hours ago
      I've also been using Cluster-API + Cluster-API-Provider-Hetzner:

      https://github.com/syself/cluster-api-provider-hetzner

      Works rock solid.
  • jillesvangurp 2 hours ago
    The key take-home point here is not how amazingly cheap Hetzner is, which it is, but how much of an extortion game Google, Amazon, MS, etc. are playing with their cloud services. These are trillion-dollar companies because they are raking in cash with extreme margins.

    Yes, there is some added value in the level of convenience provided. But maybe with a bit more competition, pricing could be more competitive. A lot more competitive.
  • chipdart 19 hours ago
    I loved the article. Insightful, and packed with real-world applications. What a gem.

    I have a side question pertaining to cost-cutting with Kubernetes. I've been musing over the idea of setting up Kubernetes clusters similar to these ones but mixing on-premises nodes with nodes from the cloud provider. The setup would be something like:

    - vCPUs for bursty workloads,

    - bare-metal nodes for the performance-oriented workloads required as base loads,

    - on-premises nodes for spiky performance-oriented workloads, and dirt-cheap on-demand scaling.

    What I believe will be the primary unknown is egress costs.

    Has anyone ever toyed around with the idea?
    • mhuffman 18 hours ago
      For dedicated servers they say this:

      > All root servers have a dedicated 1 GBit uplink by default and with it unlimited traffic.

      > Inclusive monthly traffic for servers with 10G uplink is 20TB. There is no bandwidth limitation. We will charge €1/TB for overusage.

      So it sounds like it depends. I have used them for (I'm guessing) 20 years and have never had a network problem with them or a surprise charge. Of course I mostly worked in the low double-digit terabytes, but I have had servers with them that handled millions of requests per day with zero problems.
      • pdpi 17 hours ago
        1 / 8 * 3600 * 24 * 30 = 324,000, so that 1 GBit/s server could conceivably get 324TB of traffic per month "for free". It obviously won't, but even a tenth of that is more data than is included with the 10G link.
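        (The same back-of-the-envelope calculation written out, assuming a 30-day month and ignoring any fair-use policy:

          \[ 1\ \text{Gbit/s} = 0.125\ \text{GB/s}, \qquad 0.125\ \tfrac{\text{GB}}{\text{s}} \times 86{,}400\ \tfrac{\text{s}}{\text{day}} \times 30\ \text{days} = 324{,}000\ \text{GB} \approx 324\ \text{TB/month} \]
        )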
        • jorams 16 hours ago
          They do have a fair use policy on the 1 GBit uplink. I know of one report [1] of someone using over 250TB per month getting an email telling them to reduce their traffic usage.

          The 10 GBit uplink is something you need to explicitly request, and presumably it is more limited because if you go through the trouble of requesting it, you likely intend to saturate it fairly consistently, and that server's traffic usage is much more likely to be an outlier.

          [1]: https://lowendtalk.com/discussion/180504/hetzner-traffic-use-notice-unlimited-unlimited
        • lyu07282 17 hours ago
          20TB of egress on AWS runs you almost $2,000, btw. One of the biggest benefits of Hetzner.
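          (Roughly what that comes from, assuming AWS's usual ~$0.09/GB list price for internet egress; the exact tiering varies:

            \[ 20\ \text{TB} = 20{,}000\ \text{GB}, \qquad 20{,}000\ \text{GB} \times \$0.09/\text{GB} \approx \$1{,}800 \]
          )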
      • chipdart 16 hours ago
        > We will charge €1/TB for overusage.

        It sounds like a good tradeoff. The monthly cost of a small vCPU is equivalent to a few TB of bandwidth.
    • adamcharnock 4 hours ago
      We've toyed around with this idea for clients that do some data-heavy data-science work. Certainly I could see that running an on-premise MinIO cluster could be very useful for providing fast access to data within the office.

      Of course you could always move the data-science compute workloads to the cluster, but my gut says that bringing the data closer to the people that need it would be the ideal.
    • threeseed 17 hours ago
      > Has anyone ever toyed around with the idea?

      Sidero Omni have done this: https://omni.siderolabs.com

      They run a WireGuard network between the nodes so you can have a mix of on-premise and cloud within one cluster. Works really well but unfortunately is a commercial product with a pricing model that is a little inflexible.

      But at least it shows it's technically possible, so maybe open source options exist.
      • SOLAR_FIELDS 17 hours ago
        You could make a mesh with something like Netmaker to achieve similar results using FOSS. Note I haven't used Netmaker in years, but I was able to achieve this in some of their earlier releases. I found it to be a bit buggy and unstable at the time due to it being such young software, but it may have matured enough now that it could work in an enterprise-grade setup.

        The sibling comment's recommendation, Nebula, does something similar with a slightly different approach.
      • chipdart 15 hours ago
        > They run a WireGuard network between the nodes so you can have a mix of on-premise and cloud within one cluster.

        Interesting.

        A quick search shows that some people have already toyed with the idea of rolling out something similar.

        https://github.com/ivanmorenoj/k8s-wireguard
      • nullify88 7 hours ago
        I believe the Cilium CNI has this functionality built in. Other CNIs may do also.
      • sneak 17 hours ago
        Slack’s Nebula does something similar, and it is open source.
    • kgdkhxkzh 19 hours ago
      [flagged]
    • oblio 18 hours ago
      I'm a bit sad the aggressive comment by the new account was deleted :-(

      The comment was making fun of the wishful thinking and the realities of networking.

      It was a funny comment :-(
      • bdcravens 17 hours ago
        Enable "showdead" on your profile and you can see it.
      • rad_gruchalski 18 hours ago
        It wasn't funny. I can still see it. The answer was VPN. If you want to go fancy you can do Istio with VMs.
        • ffsm8 18 hours ago
          And if you wanna be lazy, there is a Tailscale integration to run the cluster communication over it.

          https://tailscale.com/kb/1236/kubernetes-operator

          They've even improved it, so you can now actually resolve the services etc. via the tailnet DNS:

          https://tailscale.com/learn/managing-access-to-kubernetes-with-tailscale

          I haven't tried that second part though, only read about it.
          • rad_gruchalski 18 hours ago
            Okay, vpn it is.
            • ffsm8 17 hours ago
              I just wanted to provide the link in case someone was interested, I know you already mentioned it 。 ◕ ‿ ◕ 。

              (Setting up a k8s cluster over a software VPN was kinda annoying the last time I tried it manually, but super easy with the Tailscale integration.)
          • juiyhtybr 18 hours ago
            yes, like i said, throw an overlay on that motherfucker and ignore the fact that when a customer request enters the network it does so at the cloud provider, then is proxied off to the final destination, possibly with multiple hops along the way.

            you can't just slap an overlay on and expect everything to work in a reliable and performant manner. yes, it will work for your initial tests, but then shit gets real when you find that the route from datacenter a to datacenter b is asymmetric and/or shifts between providers, altering site-to-site performance on a regular basis.

            the concept of bursting into on-prem is the most offensive bit about the original comment. when your site traffic is at its highest, you're going to add an extra network hop and proxy into the mix, with a subset of your traffic getting shipped off to another datacenter over internet-quality links.
            • threeseed 10 hours ago
              a) Not every Kubernetes cluster is customer-facing.

              b) You should be architecting your platform to accommodate these very common networking scenarios, i.e. having edge caching. Because slow backends can be caused by a range of non-networking issues as well.

              c) Many cloud providers (even large ones like AWS) are hosted in, or have special peering relationships with, third-party DCs, e.g. [1]. So there are no "internet quality links" if you host your equipment in one of the major DCs.

              [1]: https://www.equinix.com.au/partners/aws
            • chipdart 16 hours ago
              > yes, like i said, (...)

              I'm sorry, you said absolutely nothing. You just sounded like you were confused and for a moment thought you were posting on 4chan.
              • rad_gruchalski 17 hours ago
              Nobody said „do it guerilla-style”. Put some thought into it.
  • acac10 2 hours ago
    // Taking another slant at the discussion: Why Kubernetes?

    Thank you for sharing your experience. I also have my 3 personal servers with Hetzner, plus a couple of VM instances at Scaleway (a French outfit).

    Disclaimer: I'm a Googler, was an SRE for ~10 years for Gmail, identity, social, apps (GSuite nowadays) and more, managed hundreds of jobs in Borg, and was one of the 3 founders of the current dev+devops internal platform (I focused on the releases, prod, and capacity side of the platform). I've dabbled in K8s in my personal time. My opinions, not Google's.

    So, my question is: given the significant complexity that K8s brings (I don't think anyone disputes this), why are people using it outside medium-large environments? There are simpler yet flexible and effective job schedulers that are way easier to manage. Nomad is an example.

    Unless you have a LOT of machines to manage, with many jobs (I'd say 250+), K8s' complexity, brittleness and overhead are not justifiable, IMO.

    The emergence of tools like Terraform and the *many* other management layers on top of K8s that try to make it easier but just introduce more complexity and their own abstractions is in itself a sign of that inherent complexity.

    I would say that only a few companies in the world need that level of complexity. And then they *will* need it, for sure. But for most it's like buying a Formula 1 car to commute in a city.

    One other aspect I have noticed is that technical teams tend to carry over the mess they had in their previous "legacy" environment and just replicate it in K8s, instead of trying to do an architectural design of the whole system's needs. And the K8s model enables that kind of mess: a "bucket of things".

    Those two things combined mean that nowadays every company has soaring cloud costs and is running things they know nothing about but are afraid to touch in case they break something. And an outage is more career-harming than a high bill that Finance will deal with later, so why risk it, right? A whole new IT area has now been coined to deal with this: FinOps :facepalm:

    I'm just puzzled by the whole situation, tbh.
    • KaiserPro 1 hour ago
      I too used to run a large clustered environment (VFX) and now work at a FAANG which has a "Borg-like" scheduler.

      K8s has a whole kit of parts which sound really grand when you are starting out on a new platform, but quickly become a pain when you actually start to implement it. I think that's the biggest problem: by the time you've realised that actually you don't need k8s, you've invested so much time into learning the sodding thing that it's difficult to back out.

      The other seductive thing is that Helm provides "AWS-like" features (i.e. fancy load balancing rules) that are hard to figure out unless you've dabbled with the underlying tech before (varnish/nginx/etc are daunting, and so are storage and networking).

      This tends to lead to utterly fucking stupid networking systems, because unless you know better, that looks normal.
  • hipadev23 18 hours ago
    Be careful with Hetzner. They null-routed my game server on launch day due to false positives from their abuse system, and then took 3 days for their support team to re-enable traffic.

    By that point I had already moved to a different provider, of course.
    • danpalmer 16 hours ago
      Digital Ocean did this to my previous company. They said we’d been the target of a DOS attack (no evidence we could see). They re-enabled the traffic, then did it again the next day, and then again. When we asked them to stop doing that they said we should use Cloudflare to prevent DOS attacks… all the box did was store backups that we transferred over SSH. Nothing that could go behind Cloudflare, no web server running, literally only one port open.
    • teitoklien 18 hours ago
      Where did you move? Asking to keep a list of options for my game servers; I'm using OVH game servers atm.
      • hipadev23 15 hours ago
        I went back to AWS. Expensive, but reliable, and with support I can get ahold of. I’d still like to explore OVH someday though.
        • teitoklien 14 hours ago
          Nothing beats AWS tbh. The level of extra detail AWS adds, like emailing and alerting a gazillion times before making any changes to underlying hardware, even if non-disruptive. Robust <24-hour support from detailed, experienced and technical support staff, and a very visible customer-obsession-laced experience all around. OVH has issues with randomly taking down VPS/bare-metal instances, with their support staff having no clue / late, non-real-time data on their instance state; they lost a ton of customer data in their huge datacenter fire 2 years ago, didn't even replicate the backups across multiple datacentres like they were supposed to, and got sued a ton too.

          I use OVH because the cost reduction supremely adds up for my workloads (remote video editing / a custom rendering farm at scale, with much cheaper OVH S3 suitable for my temporary but too-many-asset workload with high egress requirements), but otherwise I miss AWS and get now just how much superior their support and attention to detail is.
    • ronsor 17 hours ago
      Reading comments from the past few days makes it seem like dealing with Hetzner is a pain (and as far as I can tell, they aren't really that cheaper than the competitors).
      • gurchik 17 hours ago
        > (and as far as I can tell, they aren't really that cheaper than the competitors)

        Can you say more? Their Cloud instances, for example, are less than half the cost of OVH's, and less than a fifth of the cost of a comparable AWS EC2 instance.
        • lurking_swe 17 hours ago
          Even free servers are of no use if they're not usable during a product launch. :) You get what you pay for, I guess.

          But I do agree, it is much cheaper.
          • vachina 11 hours ago
            To be fair, what use is a server if you can’t afford to keep it running? This is especially true for very bootstrapped startups.
            • lurking_swe 9 hours ago
              We all start somewhere. :) Hetzner can be a good fit for many small companies.

              But let’s also be honest, if you’re THAT bootstrapped, you probably have no business running Kubernetes to begin with. If the company has a short runway, it doesn’t make sense to work on a complex architecture from the start. Focus on shipping something and getting revenue.
      • victorbjorklund 15 hours ago
        I don't think so. We see the outliers. Those happen at Linode, Digital Ocean, etc. also. And yes, even at Google Cloud and AWS you sometimes get either unlucky or unfairly treated.
      • jjeaff 17 hours ago
        What competitors are similar to Hetzner in pricing? Last I checked, they seemed quite a bit cheaper than most.
        • Frotag 10 hours ago
          Forum for cheap hosts:

          https://lowendtalk.com/

          Wouldn't recommend any of these outside of personal use though.
        • riku_iki 8 hours ago
          OVH is a larger provider; their servers are usually not significantly more expensive than Hetzner's.
      • jgalt212 16 hours ago
        > they aren't really that cheaper than the competitors

        This is demonstrably false.
      • jacooper 14 hours ago
        Honestly, Hetzner support has been outstanding in my experience. They are always there and very responsive over email.
        • jpgvm 3 hours ago
          If you prefer no-bullshit communications they are great. They are to the point, terse, and very German. I find this both refreshing and exactly what I want/need out of support. The few times I have needed to contact them it's been HW-related. One was an SSD that was clearly having issues even though SMART reported nothing wrong; I sent them blktrace output and they said yup, that checks out, and scheduled a disk replacement right away. The other time was a network-related problem with their transit: I had some ASNs that I was trying to talk to suddenly getting some pretty damn cursed paths and a big increase in latency as a result. They sorted out the path weights super fast and everything has been great since.

          The only other time I have received better support was from Aussie ISPs. Back in the day when you called Internode, the guy who answered the phone was a bona fide network engineer and would go as far as getting a shell on the DSLAM to check out what was going on. To me that is peak support: live debugging of the problem!

          Similarly, I called into Aussie Broadband to do my first NBN setup, explained I did "BYO" modem because I was going to initiate the PPPoE session with my Linux router, and they said no problem. She even offered to send me a cookie-cutter pppd config along with the info to set it up myself. Easily some of the most knowledgeable and "can do" attitude for first-layer support I have encountered.

          Needless to say, when I encounter damn good support I stay, even when it costs more.
  • Neil44 4 hours ago
    When I first started hosting servers/services for customers I was using EC2 and Rackspace; then I discovered Linode and was happy it was so much cheaper with apparently no downside. After the first couple of interactions with support I started to relax. Then I discovered OVH, same story. I haven't needed their support yet though.
  • james_sulivan 2 hours ago
    For those considering Hetzner, there is also Contabo, another German hosting company that is also good, at least in my experience.
  • s3rius 11 hours ago
    That's a really good article. We actually migrated recently as well, and we were using dedicated nodes in our setup.

    In order to integrate a load balancer provided by Hetzner with our k8s on dedicated servers, we had to implement a super thin operator that does it: https://github.com/Intreecom/robotlb

    If anyone is inspired by this article and wants to do the same, feel free to use this project.
  • no_carrier13 hours ago
    > While DigitalOcean, like other providers, offers a free managed control plane, there is typically a 100% markup on the nodes that belong to these managed clusters.
    I don't think this is true. With DigitalOcean, the worker nodes cost the same as regular droplets; there are no additional costs involved. This makes DigitalOcean's offering very attractive: a free control plane you don't have to worry about, free upgrades, and some extra integrations with things like the load balancer, storage, etc. I can't think of a reason not to go with that over self-managed.
    • czhu1212 hours ago
      The actual nodes are still way more expensive on DigitalOcean than they are on Hetzner. That's probably the main reason.
      8 GB RAM with a shared CPU on Hetzner is ~$10; the equivalent on DigitalOcean is $48.
  • mnming4 hours ago
    I feel like a lot of the work described in the article could be automated by kops, probably in a much better way, especially when it comes to day-2 operations.
    I wonder what the motivation is behind manually spinning up a cluster instead of going with more established tooling?
  • bittermandel5 hours ago
    We're very happy to use Hetzner for our bare-metal staging environments to validate functionality, but I still feel reluctant to put our production there. Disks don't quite work as intended at all times, and our vSwitch setup has gotten reset more than once.
    All of this makes sense considering the extremely low price.
  • ArtTimeInvestor17 hours ago
    Can anybody speak to the pros and cons of Hetzner vs OVH?
    There aren't many large European cloud companies, and I would like to understand how they differentiate.
    Ionos is another European one. Currently, it looks like their cloud business is stagnating, though.
    • Aachen3 hours ago
      My main complaint with OVH is that their checkout process is broken in various ways (missing translations so you get French bits, broken translations so placeholders like ACCEPT_BUTTON leak through, legally binding terms with typos and weird formatting because someone copied them from a PDF into a textarea, UIs from the 90s plastered in between modern ones, no option to renew a domain for longer than a year, confusing automatic renewal setup, and so on). The control panel in general is quite confusing. They also don't allow hosting an email server (port 25 blocked); IIRC the docs tell you to go away and use a competitor.
      I didn't have any of these web UI issues with Hetzner, but IIRC OVH is cheaper for domain names, as well as having very reliable and fast DNS servers (measured across various query types over some 6 months), and that's why I initially chose them — until my home ISP gave me a burned IP address and I needed an externally hosted server for originating email (despite it coming from an old and trusted domain that permitlists the IP address), so now I'm with both OVH and Hetzner... Anyway, another thing I like about OVH is that you can edit the raw zone file data and that they support some of the more exotic record types. I don't know how Hetzner compares on domain hosting, though.
    • j16sdiz2 hours ago
      I use Scaleway for my EU cloud needs.
      It's a very low-usage toy server, so I can't speak to performance/cost.
    • thenaturalist16 hours ago
      I'd say stay clear of Ionos. Bonkers first experience in the last two weeks.
      There's a graphical "data center designer" with no ability to open multiple tabs; instead it always reroutes you to the main landing page.
      I attached 3 IGWs to a box, all public IPs; the GUI showed "no active firewall rules".
      IGW 1: 100% packet loss over 1 minute.
      IGW 2: 85% packet loss over 1 minute.
      IGW 3: 95% packet loss over 1 minute.
      Turns out "no active firewall rules" just wasn't the case and explicit whitelisting is absolutely required.
      But wait, there's more!
      I created a hosted PostgreSQL instance and assigned a private subnet for creation. SSH into my server, ping the URL of the created Postgres instance: the DB's IP is outside the CIDR range of the assigned subnet and unreachable. What?
      Deleted the instance, created another one with the exact same settings. Worked this time around.
      Support quality also varies extremely. Out of 3 encounters, I had a competent person once. The other two said straight out that they had no idea what was going on.
      • ArtTimeInvestor6 hours ago
        Is it not possible to configure the setup without the graphical interface?
        Are there cloud providers you prefer?
        • mkesper5 hours ago
          You could use their cloud API (https://api.ionos.com/docs/cloud/v6/) or e.g. the Terraform provider: https://docs.ionos.com/reference/config-management-tools/config-management-tools. I don't have any practical experience with this provider, though.
  • usrme17 hours ago
    This is probably out of left field, but what is the benefit of having a naming scheme for nodes without any delimiters? Reading at a glance and not knowing the region name convention of a given provider (i.e. Hetzner), I'm at a loss to quickly decipher "<region><zone><environment><role><number>" from "euc1pmgr1". I feel like I'm missing something, because having delimiters would make all sorts of automated parsing much easier.
    • BillFranklin17 hours ago
      Quicker to type and scan! Though I admit this is preference; delimiters would work fine too.
      Parsing works the same but is based on a simple regex rather than splitting on a hyphen.
      euc = eu central; 1 = zone/dc; p = production; wkr = worker; 1 = node id
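      For illustration, parsing along those lines might look like this (a sketch; the exact field widths are assumptions based on the examples above):

        import re

        # Parse "<region><zone><environment><role><number>" names such as
        # "euc1pwkr1" or "euc1pmgr1". Assumes a 3-letter region, 1-digit zone,
        # 1-letter environment and 3-letter role; adjust the widths to your scheme.
        NODE_NAME = re.compile(
            r"^(?P<region>[a-z]{3})"
            r"(?P<zone>\d)"
            r"(?P<env>[a-z])"
            r"(?P<role>[a-z]{3})"
            r"(?P<num>\d+)$"
        )

        def parse_node_name(name: str) -> dict[str, str]:
            match = NODE_NAME.match(name)
            if match is None:
                raise ValueError(f"unrecognised node name: {name!r}")
            return match.groupdict()

        print(parse_node_name("euc1pwkr1"))
        # {'region': 'euc', 'zone': '1', 'env': 'p', 'role': 'wkr', 'num': '1'}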
      • usrme17 hours ago
        Thanks for getting back to me! Now that you've written it out, it's plainly obvious, but for me the readability and flexibility of delimiters beats the speed of typing and scanning. Many a time I've been grateful that I added delimiters, because then I was no longer hamstrung by potential changes to the length of any particular segment within the name.
        • adastra227 hours ago
          You can more easily double-click-select the full hostname when there are no delimiters.
      • stackskipton11 hours ago
        Yeah, not putting in a delimiter and then having to change our format has bitten me so many times. Delimiter or bust.
    • o11c15 hours ago
      You can treat the numeric parts as self-delimiting ... that leaves only the assumption that "environment" is a single letter.
  • sureglymop15 hours ago
    I went with Hetzner bare metal, set up a Proxmox cluster over it and then have Kubernetes on top. Gives me a lot of flexibility, I find.
  • aliasxneo19 hours ago
    I’m planning on doing something similar but want to use Talos with bare-metal machines. I expect to see similar price reductions from our current EKS bill.
    • threeseed18 hours ago
      Depending on your cluster size, I highly recommend Omni: https://omni.siderolabs.com
      It took minutes to set up a cluster, and I love having a UI to see what is happening.
      I wish there were more products like this, as I suspect there will be a trend towards more self-managed Kubernetes clusters given how expensive the cloud is becoming.
    • MathiasPius17 hours ago
      I set up a Talos bare metal cluster about a year ago, and documented the whole process on my website. Feel free to reach out if you have any questions!
      • cedws10 hours ago
        Any thoughts/feelings about Talos vs Bottlerocket?
  • Scotrix18 hours ago
    Very nicely written article. I’m also running a k8s cluster, but on bare metal with qemu-kvm VMs for the base load. I wonder why you would choose VMs instead of bare metal if you're looking for cost optimisation (additional overhead maybe?). Could you share more about this, or did I miss it?
    • BillFranklin18 hours ago
      Thank you! The cloud servers are sufficiently cheap for us that we could afford the extra flexibility we get from them. Hetzner can move VMs around without us noticing, but in contrast they are currently rebooting a number of metal machines for maintenance, and have been for a little while, which would have been disruptive, especially during the migration. I might have another look at metal next year, but I’m happy with the cloud VMs currently.
      • karussell16 hours ago
        Note, they usually do not reboot or touch your servers. But yes, the current maintenance of their metal routers (rare, like once every 2 years) requires you to juggle a bit with different machines in different datacenters.
  • dvfjsdhgfv20 hours ago
    > Hetzner volumes are, in my experience, too slow for a production database. While you may in the past have had a good experience running customer-facing databases on AWS EBS, with Hetzner's volumes we were seeing >50ms of IOWAIT with very low IOPS.
    There is a surprisingly easy way to address this issue: use (ridiculously cheap) Hetzner metal machines as nodes. The ones with NVMe storage offer excellent performance for DBs and often have generous amounts of RAM. I'd go as far as to say you'd be better off investing in two or more beefy bare-metal machines for a master-replica(s) setup rather than running the DB on k8s.
    If you don't want to be bothered with the setup, you can use one of many modern packages such as Pigsty: https://pigsty.cc/ (not affiliated, but a huge fan).
    • threeseed18 hours ago
      There are plenty of options for running a database on Kubernetes whilst using local NVMe storage.
      These range from just pinning the database pods to specific nodes and using a LocalPathProvisioner, to distributed solutions like JuiceFS, OpenEBS, etc.
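      A minimal sketch of the pinning approach, assuming a node labelled for database workloads, an existing namespace, and Rancher's local-path provisioner installed (all names here are illustrative):

        # Sketch: a single-replica Postgres StatefulSet pinned to an NVMe node,
        # with its data on a local-path PersistentVolumeClaim.
        from kubernetes import client, config

        config.load_kube_config()
        apps = client.AppsV1Api()

        statefulset = client.V1StatefulSet(
            metadata=client.V1ObjectMeta(name="pg", namespace="db"),
            spec=client.V1StatefulSetSpec(
                service_name="pg",
                replicas=1,
                selector=client.V1LabelSelector(match_labels={"app": "pg"}),
                template=client.V1PodTemplateSpec(
                    metadata=client.V1ObjectMeta(labels={"app": "pg"}),
                    spec=client.V1PodSpec(
                        node_selector={"workload": "database"},  # pin to the NVMe node(s)
                        containers=[client.V1Container(
                            name="postgres",
                            image="postgres:16",
                            env=[client.V1EnvVar(name="POSTGRES_PASSWORD", value="change-me")],
                            volume_mounts=[client.V1VolumeMount(
                                name="data", mount_path="/var/lib/postgresql/data")],
                        )],
                    ),
                ),
                volume_claim_templates=[client.V1PersistentVolumeClaim(
                    metadata=client.V1ObjectMeta(name="data"),
                    spec=client.V1PersistentVolumeClaimSpec(
                        access_modes=["ReadWriteOnce"],
                        storage_class_name="local-path",  # LocalPathProvisioner class
                        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
                    ),
                )],
            ),
        )
        apps.create_namespaced_stateful_set(namespace="db", body=statefulset)

      Note that with local-path the data is tied to that node's disk, so replication has to happen at the database level rather than in the storage layer.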
    • BillFranklin20 hours ago
      Thanks, I hadn’t heard of Pigsty. As you say, I had to use NVMe SSDs for the DBs; the performance is pretty good, so I didn’t look into getting metal nodes.
    • gourneau7 hours ago
      Thanks for the Pigsty link. I have been a big fan of running Postgres on metal machines.
  • czhu129 hours ago
    Funnily enough, we made the exact same transition from Heroku to DigitalOcean's managed Kubernetes service and saved about 75%. Presumably this means that had we moved from Heroku to Hetzner, it would have been 93% savings!
    The costs of cloud hosting are totally out of control; I would love to see more efforts that let developers move down the stack.
    I've been humbly working on https://canine.sh, which basically provides a Heroku-like interface to any K8s cluster.
  • kakoni17 hours ago
    Anybody running k3s/k8s on Hetzner using CAX servers? How's that working?
  • devops00017 hours ago
    Did you try Cloud66 for deployment?
  • cjr20 hours ago
    What about cluster autoscaling?
    • BillFranklin20 hours ago
      I didn’t touch on that in the article, but essentially it’s a one line change to add a worker node (or nodes) to the cluster, then it’s automatically enrolled.<p>We don’t have such bursty requirements fortunately so I have not needed to automate this.
    • preisschild4 hours ago
      Works rather well. I use CAPI + Cluster-Autoscaler + Talos and new nodes are provisioned and ready within 2-3 minutes.
  • aravindputrevu17 hours ago
    Do you know that they are cutting their free-tier bandwidth? I haven't read too much into it, but I heard a few friends were worried about it.
    At the end of the day, they are a business!
  • segmondy19 hours ago
    Great write up Bill!
  • postepowanieadm20 hours ago
    Lovely website.
  • MuffinFlavored17 hours ago
    <a href="https:&#x2F;&#x2F;github.com&#x2F;puppetlabs&#x2F;puppetlabs-kubernetes">https:&#x2F;&#x2F;github.com&#x2F;puppetlabs&#x2F;puppetlabs-kubernetes</a><p>What do the fine people of HN think about the size&#x2F;scope&#x2F;amount of technology of this repo?<p>It is referenced in the article here: <a href="https:&#x2F;&#x2F;github.com&#x2F;puppetlabs&#x2F;puppetlabs-kubernetes&#x2F;compare&#x2F;main...bilbof:puppetlabs-kubernetes:main#diff-50ae7fb3724b662b58dbc1c71663cb16a484ab36aecd5a11317fb14465f847fa">https:&#x2F;&#x2F;github.com&#x2F;puppetlabs&#x2F;puppetlabs-kubernetes&#x2F;compare&#x2F;...</a>
    • KaiserPro1 hour ago
      Puppet's original design was agent-based: an agent ran on the things it was meant to configure. It was never very good at bringing up machines before the agent could connect.
      The general flow was imager -> pre-configured Puppet agent -> connect to controller -> apply changes to make it perform as x.
      Originally it never really had the capacity to kick off the imaging/instantiation itself. This meant that it scaled better (shared state is handled better than in Ansible).
      However, Ansible shone because, although it was a bastard to get running on more than a couple of hundred hosts at any speed, you could tell it to spin up 100x EC2 (or equivalent) machines and then transform them into whichever role was needed. In Puppet that was impossible to do in one go.
      I assume that's changed, but I don't miss Puppet.
    • mkesper5 hours ago
      Honestly, I was surprised to hear about Puppet at all. I thought it was dead and buried, like Chef.
  • Iwan-Zotow14 hours ago
    This is good.
    Well, running on bare metal would be even better.