26 comments

  • MeetingsBrowser · 7 hours ago
    I use claude code every day, I've written plugins and skills, use MCP servers, subagent workflows, and filled out the "Find your level" quiz as such.

    According to the quiz, I am a beginner!
    • ryanchoi · 30 minutes ago
      I was a bit confused by the quiz results as well. But it's just a bug :)

      Level ranges for the 10 questions (the score ranges are in the html): Beginner 0~3, Intermediate 4~7, Advanced 8~10

      Makes sense. But:

      - You get 0 points if you press A/B, 1 point if you press C, 2 points if you press D

      - Scoring uses a fallback to Beginner level if your total score exceeds the expected max, which is 10

      `const t = Object.values(r).find(a => l >= a.min && l <= a.max) ?? r.beginner`

      Pressed D 5x then A 5x, got Advanced
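The fallback bug described above can be sketched as follows (a hypothetical reconstruction: the `levels`, `pointsFor`, and `gradeQuiz` names are assumed; only the quoted `find(...) ?? beginner` expression comes from the quiz itself):

```javascript
// Hypothetical reconstruction of the quiz's scoring bug.
const levels = {
  beginner:     { min: 0, max: 3,  label: "Beginner" },
  intermediate: { min: 4, max: 7,  label: "Intermediate" },
  advanced:     { min: 8, max: 10, label: "Advanced" },
};

// Per the comment above: A/B score 0, C scores 1, D scores 2,
// so ten questions can total up to 20 even though the ranges cap at 10.
const pointsFor = (answer) => ({ A: 0, B: 0, C: 1, D: 2 }[answer]);

function gradeQuiz(answers) {
  const total = answers.reduce((sum, a) => sum + pointsFor(a), 0);
  // The bug: answering all D gives total = 20, which matches no range,
  // so the `??` fallback silently demotes the best possible score to Beginner.
  const level = Object.values(levels)
    .find((r) => total >= r.min && total <= r.max) ?? levels.beginner;
  return level.label;
}
```

With this model, ten D answers score 20 and fall through to "Beginner", while five D's plus five A's score exactly 10 and land on "Advanced" — matching what the commenter observed.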
    • annie511266728 · 2 hours ago
      A lot of these quizzes end up measuring whether you use the author's preferred workflow, not whether you're actually effective with the tool.

      Those aren't the same thing.
    • noosphr · 56 minutes ago
      Just ask it to fill it in for you.

      Master level.
    • BloondAndDoom · 2 hours ago
      I think it's just buggy; I had the same results despite knowing every single question in depth, other than building a plugin.
    • Esophagus4 · 7 hours ago
      Did anyone not get beginner?

      I got it as well.
      • Uncorrelated · 6 hours ago
        I responded with a mix of mostly B and C answers and got "advanced." Yet, as pointed out by another commenter, selecting all D answers (which would make you an expert!) gets you called a beginner.

        I can only assume the quiz itself was vibe-coded and not tested. What an incredible time we live in.
        • taftster · 5 hours ago
          Or it's taking into account the Dunning-Kruger effect: if you think you are an expert in all cases, you are really a beginner in everything.
      • the_other · 6 hours ago
        I'm a beginner with agentic coding. I vibe code something most days, from a few lines up to refactors over a few files. I don't knowingly use skills, rarely _choose_ to call out to tools, haven't written any skills and only one or two ad hoc scripts, and have barely touched MCPs (because the few I've used seem flaky and erratic). I answered as such and got... intermediate.
  • npilk · 8 hours ago
    Strongly agree with the sentiment, but I'd say if you're familiar with the terminal you may as well just install it and truly 'learn by doing'!

    I could see this being great for true beginners, but for them it might be nice to have even more basics to start (how do I open the terminal, what is a command, etc.).
    • heyethan · 2 hours ago
      I feel that the tricky part now is you can "learn by doing" without ever knowing if you're doing it right. You get something working, but your mental model can be completely off.
  • theptip · 2 hours ago
    I'm missing something here. Isn't the best "doing" to actually use Claude to build stuff? The barrier to entry is so low.

    Why do you need to memorize slash commands? They are somewhat useful, and you can just read them from the autocomplete.
  • b212 · 4 hours ago
    I feel there's a lot of marketing and pure bullshit around LLM configuration and conventions.

    The law of diminishing returns applies here perfectly: you can learn prompting in 2 hours and get a 400% performance boost, or spend weeks on subagents and skills and Opus and at best it's another 50% boost. But not really: in my case, on a good day Sonnet is a genius and on a bad one Opus is a moron. One day the same query consumes 6k tokens, the next 700k.

    They want to get you hooked and need to show investors they're super busy, but in fact it's mostly smoke and mirrors. And prompting, once you learn to give proper context, is far from rocket science.
  • Yiin · 7 hours ago
    Find your level -> answer D to everything -> you're a beginner! And I thought I had high standards...
  • yoyohello13 · 8 hours ago
    People will do anything to avoid RTFM.
    • DrewADesign · 5 hours ago
      Many of the same people probably use LLMs to avoid having to WTFM, so I'm not surprised.
  • jurakovic · 7 hours ago
    Is that quiz correct? I answered mostly C or D, and maybe a few B's, but still got "Beginner". How?!
    • roxolotl · 6 hours ago
      The quiz is super weird too: A-C are knowledge answers, while D is something you've done.
  • deemeng · 1 hour ago
    Thank you to OP -- this was a really easy way to look up how plugins work inside of Claude Code.
  • fercircularbuf · 6 hours ago
    I love the pedagogical approach here and the ability to easily home in on your level before diving into content. Your approach would work really well for other subjects as well.
  • tourist_petr98 · 2 hours ago
    This is awesome, thanks for sharing!
  • grewil2 · 7 hours ago
    Side note: I don't know what Anthropic changed, but now Claude Code consumes the quota incredibly fast. I have the Max5 plan, and it just consumed about 10% of the session quota in 10 minutes on a single prompt. For $100/month, I have higher expectations.
    • landr0id · 7 hours ago
      Relevant:

      https://www.reddit.com/r/ClaudeAI/comments/1s7zgj0/investigating_usage_limits_hitting_faster_than/

      https://www.reddit.com/r/ClaudeAI/comments/1s7mkn3/psa_claude_code_has_two_cache_bugs_that_can/
      • onemoresoop · 4 hours ago
        That explains things. I'm getting this:

        API Error: 400 {"error":{"message":"Budget has been exceeded! Current cost: 271.29866200000015, Max budget: 200.0","type":"budget_exceeded","param":null,"code":"400"}}

        So I completely ran out of tokens, and I haven't even used it at all for the past couple of days, and last week my usage was very light. Scratch that: all my usage has been very light since I got this plan at work. It's an enterprise subscription, I believe; it's hard to tell, since it doesn't connect directly to Anthropic but rather goes through a proxy on Azure.

        I'm not liking this at all; it's so flaky and opaque. It's not possible to get a breakdown of what the usage went on, right? Do we have to contact Anthropic for a refund, or will they restore the bogus usage?
      • nixpulvis · 6 hours ago
        This is a serious problem with the fact that it's nearly impossible to understand what a "token" is and how to tame their use in a principled way.

        It's like if cars didn't advertise MPG, but instead something that could change randomly.
        • claw-el · 5 hours ago
          Also, certain models are more verbose than others. We are basically at the mercy of a model that likes to ramble a lot.
          • konfusinomicon · 4 hours ago
            I'm fairly certain the knob on the machine that controls the length of redundant comments and docblocks is cranked to 11. It makes me curious how much of their bottom line is driven by redundant comment output.
        • amitprasad · 5 hours ago
          Relevant post: https://modal.com/blog/dollars-per-token-considered-harmful

          (disclaimer: I work with the author)
          • nixpulvis · 4 hours ago
            I completely agree that requests are what should be charged for. But I think there are two options, given that requests aren't all going to cost the same amount:

            1. Estimate-free invoicing of the requests, letting users figure it out after the fact.
            2. Somehow estimating cost and telling users how much a request will cost.

            We have 1; we want 2.
          • uoaei · 6 hours ago
          Like if cars measured fuel efficiency or range using the knobs in the tread on your tire.
      • prodigycorp · 2 hours ago
        Anthropic really needs to open-source Claude Code.

        One of the biggest turnoffs as a Claude Code user is the CC community cargo-culting the subreddit, because community outreach is otherwise poor.
    • conception · 7 hours ago
      I noticed the 1M context window is the default, with no way not to use it. If your context is at 500-900k tokens every prompt, you're gonna hit limits fast.
      • Wowfunhappy · 6 hours ago
        I had to double-check that they'd removed the non-1M option, and... WTF? This is what's in `/config` → `model`:

            1. Default (recommended)   Opus 4.6 with 1M context · Most capable for complex work
            2. Sonnet                  Sonnet 4.6 · Best for everyday tasks
            3. Sonnet (1M context)     Sonnet 4.6 with 1M context · Billed as extra usage · $3/$15 per Mtok
            4. Haiku                   Haiku 4.5 · Fastest for quick answers

        So there's an option to use non-1M Sonnet, but not non-1M Opus?

        Except wait, I guess that actually makes sense, because it says Sonnet 1M is billed as extra usage... but also WTF, why is Sonnet 1M billed as extra usage? So Opus 1M is included in Max, but if you want the *worse* model with that much context, you have to pay extra? Why the heck would anyone do that?

        The screen does also say "For other/previous model names, specify with --model", so I assume you can use that to get 200K Opus, but I'm very confused why Anthropic wouldn't include that in the list of options.

        What a strange UX decision. I'm not personally annoyed, I just think it's bizarre.
        • retrofuturism · 6 hours ago
          `/model opus` sets it to the original non-1M Opus... for now.
          • windexh8er · 5 hours ago
            Thanks. I quickly burned through $100 in credit when I started using Opus 4.6 in OpenCode via OpenRouter. My session stopped, and I was getting an error not representative of credit availability, so I was surprised when, after a few minutes, I finally realized Opus had just destroyed those credits on a bullshit reasoning loop it got stuck in. Anthropic seems to know that the expanded context is better for their bottom line, as they've made it the default now.

            And as others have said, it's very easy to burn through token usage on the $100/month plan. It's getting to the point where it's going to make a lot of sense to do model routing when using coding tooling.
          • weird-eye-issue · 59 minutes ago
            Not sure why you were downvoted, because this is actually correct. You can also use --model opus.
      • aberoham · 7 hours ago
        export CLAUDE_CODE_DISABLE_1M_CONTEXT=1
        • teaearlgraycold · 7 hours ago
          Anthropic is not building goodwill as a consumer brand. They've got the best product right now, but there's a spring charging behind me ready to launch me into OpenCode as soon as the time is right.
          • kylecazar · 7 hours ago
            Would you use Opus if you switched to OpenCode?
            • teaearlgraycold · 6 hours ago
              I'd like to use Opus with OpenCode right now to combine the best TUI agent app with the best LLM. But my understanding is Anthropic will nuke me from orbit if I try that.
              • joecot · 6 hours ago
                You can use Opus with OpenCode anytime you want, just not with the Claude plan. You can use it via API with any provider, including Anthropic's API. You can use it with GitHub Copilot's plan. The only thing you can't do without getting banned is use OpenCode with one of Claude's plans.
                • nurettin · 2 hours ago
                  I keep seeing this "you can use the inconvenient and unpredictably costly way all you want" pedantic knee-jerk response so often lately.

                  It's like saying humans *can* fly with a paraglider. It's correct and useless. Most people here won't have the cash to burn on unbounded Opus API usage.
              • corford · 6 hours ago
                OpenCode with a Copilot Business sub and Opus 4.6 as the model works well
                • teaearlgraycold · 3 hours ago
                  I'm looking at their plans (https://github.com/features/copilot/plans), and it seems like the limits might be pretty low, even with the Pro+ plan, which is 2x the cost of Claude Pro. It seems like Claude Pro might be 10-20x the Opus tokens for only twice the price.
      • nextaccountic · 4 hours ago
        Do you pay for the full context every prompt? What happened to the idea of caching the context server-side?
        • weird-eye-issue · 58 minutes ago
          It helps a ton, but it doesn't last forever, and you still have to pay to write to the cache.
        • davesque · 3 hours ago
          You don't. Most of the time (after the first prompt following a compaction or context clear), the context prefix is cached, and you pay something like 10% of the cost for cached tokens. But your total cost is still roughly the area under a line with positive slope, so it increases quadratically with context length.
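The area-under-a-line intuition above can be sketched with a toy cost model (the numbers and the `conversationCost` function are illustrative assumptions, not Anthropic's actual pricing; the ~10% cached-token discount is the figure from the comment):

```javascript
// Toy model: each turn re-reads the cached context prefix at a discount
// and then adds `growth` fresh tokens to the context.
function conversationCost(turns, growth, pricePerTok, cacheDiscount = 0.1) {
  let context = 0; // tokens accumulated so far (the cached prefix)
  let total = 0;
  for (let t = 0; t < turns; t++) {
    total += context * pricePerTok * cacheDiscount; // re-read cached prefix
    total += growth * pricePerTok;                  // fresh tokens this turn
    context += growth;                              // context keeps growing
  }
  return total;
}
```

The fresh-token portion grows linearly with the number of turns, but the cached-prefix portion is a triangular sum (roughly n²/2 · growth · discount · price), so doubling the length of a session roughly quadruples that part of the bill. That is why long sessions without compaction get expensive even with caching.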
    • no1youknowz · 7 hours ago
      I've been jumping from Claude -> Gemini -> GPT Codex. Both Claude and Gemini really reduced quotas, so I cancelled. I only subbed to GPT for the special 2x quota in March, and now that allocation is done as well.

      I decided to give opencode a try today. It's $5 for the first month. I didn't get much success with Kimi K2: overly chatty, built too complex solutions. Burned 40% of my allocation and nothing worked. ¯\_(ツ)_/¯

      But Minimax m2.7. Wow, it feels just like Claude Opus 4.6. It really has serious chops in Rust.

      Tomorrow/Wednesday I'll try a month of their $40 plan and see how it goes.
      • victorbjorklund · 7 hours ago
        Minimax 2.7 is great. Not close to Claude, but good enough for a lot of coding tasks.
        • girvo · 4 hours ago
          GLM-5 (and 5.1) is surprisingly impressive too, I'm finding.
    • zar1048576 · 3 hours ago
      I've had similar issues with costs sometimes being all over the map. I suspect that the major providers will figure this out, as it's an important consideration in the enterprise setting.
    • lkbm · 6 hours ago
      I've heard this a few times lately, but this past weekend I built a website for a friend's birthday, and it took me several hours and many queries to get through my regular paid plan. I just use default settings (Sonnet 4.6, medium effort, thinking on).

      I'm guessing Opus eats up usage much, much faster. I don't know what's going on, since a lot of people are hitting limits and I don't seem to be.
      • notatoad · 6 hours ago
        What they changed was peak vs. off-peak usage metering.

        Using it on the weekend gets you more use than during weekdays 9-5 in US Eastern time.
        • matheusmoreira · 5 hours ago
          I waited until off-peak hours to use Opus 4.6 to do some research. One prompt consumed 100% of my 5h limit and 15% of my weekly usage. Even off-peak, it's still insane. Opus didn't even manage to finish what it was doing.
          • hrimfaxi · 5 hours ago
            I'm surprised it's during East Coast working hours and not West Coast.
            • notatoad · 5 hours ago
              The speculation I read was that it's trading hours, and they're getting a lot of load from the finance industry.
          • lkbm · 5 hours ago
          Technically, this was Friday morning, so I think I was still in peak hours.
        • teaearlgraycold · 5 hours ago
          Even with Opus I don't usually hit limits on the standard plan. But I am not doing professional work at the moment, and I actually alternate between using the LLM and reading/writing code the old-fashioned way. I can see how you'd blow through the quota quickly if you try to use LLMs as universal problem solvers.
    • xantronix · 4 hours ago
      This is a very normal thing to be the top comment on an article about how to use Claude Code.
    • outside1234 · 6 hours ago
      They need to get to profitability because that sweet sweet Saudi subsidy cash is gone gone.
      • kderbyma · 5 hours ago
        They won't be profitable at this point... they just don't realise they are eating their own tail.
    • manmal · 7 hours ago
      Looks like they are falling victim to their own slop. This smells a lot like the Amazon outages caused by mandated clanker usage.
    • maximinus_thrax · 5 hours ago
      I'm very surprised to see enshittification starting so early. I was expecting at least 3-4 years of the VC-subsidized gravy train.
      • kderbyma · 5 hours ago
        This has been 6 months of constant decline, so at this point I am wondering when they'll cliff it like WeWork.
    • skwallace36 · 7 hours ago
      Things are rough out there right now.
    • irishcoffee · 5 hours ago
      Reminds me of when I would mess with my friends on "pay per text" plans by sending them 10 text messages instead of just 1. I should start paying attention to unattended laptops and blow up some token usage in the same manner.

      It's almost like an evolution of Bobby Tables.
  • Adam_cipher · 6 hours ago
    [dead]
    • nixpulvis · 6 hours ago
      I think it's funny and interesting how LLMs are commoditizing information generation. It's completely expected, but also somewhat challenging to figure out what the best combination of "learning" and "fact" systems is.

      I'd be curious to know more about how this compares to other approaches.
    • jatora · 6 hours ago
      AI comment.
  • nickphx · 7 hours ago
    Why would anyone want to "learn" how to use some non-deterministic black box of bullshit that is frequently wrong? When you get different output for the same input, how do you learn? How is that beneficial? Why would you waste your time learning something that is frequently changing at the whims of some greedy third party? No thanks.
    • simonw · 5 hours ago
      One of the things you can learn is how to get consistently useful results out of it despite it being a non-deterministic black box.
    • ForHackernews · 7 hours ago
      Because you will soon be working for it unless you learn to make it work for you.
      • i_love_retros · 7 hours ago
        It's fucking insane that we all have to pay rent every month to an AI company just to keep doing our jobs.
        • sidrag22 · 6 hours ago
          There is certainly a future where this isn't the case. Learning how to use AI in your workflows will likely be a part of any serious dev's future, but being beholden to a data center does not seem to reflect reality. Consider all the 5m-8m models and how powerful they are today compared to what the best models did 2 years ago. If you want to stay on the absolute bleeding edge model-wise, sure, you'll be stuck with a data center for some time...

          Why isn't this just seen as a repeat of the original birth of computers? Consider the IBM 350 (3.5 MB), rented in the '50s for thousands per month. Now I have a drawer filled with SD cards that go up to 128 GB that I can't even give away.
        • nice_byte · 7 hours ago
          You literally don't have to. You can literally just keep doing your job the way that you always have.
          • i_love_retros · 6 hours ago
            I probably won't have a job for much longer if I do that, unfortunately.
            • nice_byte · 6 hours ago
              I don't think that is true.
  • AugSun · 2 hours ago
    No. 100% no. Learn *the art of programming*. Read K&R. In 5 years we will see "new is old" again. Tokens will become prohibitively expensive and, once more, another $steve.ballmer.2.0 will be yelling "developers ... developers". And Claude Code ... will become another "pentesting" / "linting" tool.
  • mrtksn · 8 hours ago
    Are people again learning a new set of tools? Just tell the AI what you want; if the AI tool doesn't allow that, then tell another AI tool to make you a translation layer that will convert natural language to the commands, etc. What's the point of learning yet another tool?
    • faeyanpiraat · 8 hours ago
      I cannot decipher what you mean; have you mixed up the tabs and wanted to post this somewhere else?

      The linked site is a pretty good interactive Claude tutorial for beginners.
      • sznio · 8 hours ago
        I don't understand the purpose of a tutorial for a natural-language AI system.
        • simonw · 5 hours ago
          That's like saying there's no point in attending a lecture on "how to get the best out of your time at University" because University courses are taught in spoken language, so you could just ask the professors.
        • rco8786 · 7 hours ago
          Claude Code is a tool that uses natural-language AI systems. It itself is not a natural-language AI system.
          • mrtksn · 7 hours ago
            The idea that AI can write code like a seasoned software developer, yet can't use its own tooling (which can be learned through an 11-chapter tutorial), doesn't make any sense.
        • arbitrary_name · 7 hours ago
          Sounds like you might benefit from a tutorial!
      • mrtksn · 8 hours ago
        Nope. Why would anybody type commands to a machine that does natural language processing? Just tell the thing what you want.
        • dsQTbR7Y5mRHnZv · 7 hours ago
          "Part of the initial excitement in programming is easy to explain: just the fact that when you tell the computer to do something, it will do it. Unerringly. Forever. Without a complaint.

          And that's interesting in itself.

          But blind obedience on its own, while initially fascinating, obviously does not make for a very likeable companion. What makes programming so engaging is that, while you can make the computer do what you want, you have to figure out how." [0]

          [0] https://www.brynmawr.edu/inside/academic-information/departments-programs/computer-science/beauty-programming
        • faeyanpiraat · 7 hours ago
          Yes, but you've got to learn what is possible.

          I wouldn't have thought to tell the machine to compact its context if I didn't know it has a context that can be compacted, right?
          • rzzzt · 6 hours ago
            Why do I need to tell the machine to compact its context? This feels like homework and/or ceremony.
            • thejazzman · 5 hours ago
              Because the machine is a tool, and tools have proper and improper usage.
          • mrtksn · 7 hours ago
            Good point, but IMHO the learning material for this should be the basics of LLMs.
    • recursive · 5 hours ago
      I haven't used Claude, but the problem seems to be not refusal but cheerful failure. "Sure, I'll help you with that!" And it produces something wrong in obvious and/or subtle ways.
    • cyanydeez · 8 hours ago
      I think somewhere between 2016 and 2026 the market realized that programmers _love_ writing tools for themselves and others, and it went full bore into catering to the Bike Shedding economy, and now AI is accelerating this to an absurd degree.
      • mrtksn · 7 hours ago
        Me too: I love writing tools for myself and end up yak shaving all the time. But why is there a tutorial for a machine that understands human language? Just type out your inner monologue and it will do it.
        • sidrag22 · 5 hours ago
          Honestly, the biggest reason I deep-dove on proper .claude stuff was because I'm a cheapass. I saw someone mention their agents/ that delegate to cheaper models, and figured that was a way I could rein in my own overall usage, and it's been true so far. I'm sure I'm one of the very few heavy Claude Code users that still stubbornly sits on the Pro version. It won't be forever; if I land an important contract or job, I'll pretty quickly hop to Max or whatever, but for my own usage right now, I'm getting by.

          Sure, maybe this stuff isn't crazy relevant 2 years from now, but right now? Giving your agent a clean way to navigate and delegate tasks to keep that context window clean? It's 100% vital.
  • htx80nerd · 6 hours ago
    I continue to find the non-stop Claude spam fascinating. Gemini and ChatGPT have been very good for my needs; Claude, not so much. Every week, if not every day, Claude spam is all over this site. But barely a peep about Gemini or ChatGPT coding capabilities.
    • ekropotin · 5 hours ago
      That’s good to know your personal preferences. Please keep us posted!
    • 8b16380d · 5 hours ago
      Tool du jour, similar to web framework of the month, etc. Gemini and ChatGPT are just as useful.