25 comments

  • cube2222 8 hours ago
    It's so nice that skills are becoming a standard; IMO they are a much bigger deal long-term than e.g. MCP.
    They're easy to author (at their most basic, just a markdown file), context-efficient by default (only the YAML front-matter is preloaded; more markdown files can be lazy-loaded as needed), and they can piggyback on existing tooling (for instance, instead of the GitHub MCP, you just write a skill describing how to use the `gh` CLI).
    Compared to purpose-tuned system prompts they don't require a purpose-specific agent, and they also compose (the agent can load multiple skills that make sense for a given task).
    Part of the effectiveness here is that AI models are heavy enough that running a sandboxed VM on the side is likely irrelevant cost-wise, so the major chat UI providers all give the model such a sandboxed environment - which means skills can also contain Python and/or JS scripts. Again, much simpler, more straightforward, and more flexible than e.g. requiring the target to expose remote MCPs.
    Finally, you can use a skill to tell your model how to properly approach using your MCP server - which previously required either long prompting or a purpose-specific system prompt, with the cons I've already described.
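    As an illustration of that shape, a minimal hypothetical skill file (only the `name`/`description` front-matter is preloaded; the file name, commands, and wording here are invented for the example):

    ```markdown
    ---
    name: github-via-gh
    description: Use the `gh` CLI to inspect GitHub issues and pull requests. Load when a task involves a GitHub repository.
    ---

    # Working with GitHub through `gh`

    Prefer the `gh` CLI over raw API calls:

    - `gh pr list --state open` to enumerate open pull requests
    - `gh issue view <number> --comments` to read an issue thread

    Ask the user before anything destructive (closing, merging).
    ```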
    • NitpickLawyer 3 hours ago
      On top of everything you've described, one more advantage is that you can use the agents themselves to edit / improve / add to the skills. An easy one is something like "take the key points from this session and add the learnings as a skill". It works both on good sessions with new paths/functionality and on "bad" sessions where you had to hand-hold the agent. They're pretty good at summarising and extracting tidbits, and you can always skim the files and do quick edits.
      Compared to MCPs, this is a much faster and more approachable flow for adding "capabilities" to your agents.
      • fizx 2 hours ago
        Add reinforcement learning to figure out which skills are actually useful, and you're really cooking.
        • NitpickLawyer 1 hour ago
          DSPy with GEPA should work nicely, yeah. Haven't tried it yet but I'll add it to my list. I think a way to share within teams is also low-hanging fruit in this space (outside of just adding them to the repo) - something more org-generic.
    • hu3 8 hours ago
      Perhaps you could help me.
      I'm having a hard time figuring out how I could leverage skills in a medium-sized web application project. It's Python, PostgreSQL, Django. Thanks in advance.
      I wonder if skills are more useful for non-CRUD-like projects. Maybe data science and DevOps.
      • macNchz 4 hours ago
        There's nothing super special about it; it's just handy if you have some instructions that you don't need the AI to see *all* the time, but that you'd like it to have available for specific things.
        Maybe you have a custom auth backend that needs an annoying local proxy setup before it can be tested - you don't need all of those instructions in the primary agents.md bloating the context on every request; a skill lets you separate them so they're only accessed when needed.
        Or if you have a complex testing setup and a multi-step process for generating realistic fixtures and mocks: the AI may only need some basic instructions on how to run the tests 90% of the time, but when it's time to make significant changes it needs info about your whole workflow and philosophy.
        I have a Django project with some hardcoded constants that I source from various third-party sites, which need to be updated periodically. Originally that meant sitting down, visiting a few websites, and copy-pasting identifiers from them. As AI got better at web search I was able to put together a prompt that did pretty well at compiling them. With a skill I can have the AI find the updated info and update the code itself, and provide it some little test scripts to validate it did everything right.
        • hu3 2 hours ago
          Thanks. I think I could use skills as "instructions I might need but don't want to clutter AGENTS.md with".
          • Sammi 23 minutes ago
            Yes, exactly. Skills are just sub-agents.md files plus an index. The index tells the agent about the content of the .md files and when to use them - just a short paragraph per file, so it's token-efficient and doesn't take much of your context.
            The poor man's "skills" is just manually managing and adding different .md files to the context.
            Importantly, every time you instruct the agent to do something correctly that it did incorrectly before, you ask it to revise a relevant .md file/"skill", so it has that correction from now on. This is how you slowly build up relevant skills: things start out as sections in your agents.md, then graduate to a separate file when they get large enough.
      • jonrosner 8 hours ago
        You could, for example, create a skill to access your database for testing purposes and pass in your table specifications so that the agent can easily retrieve data for you on the fly.
        • hu3 2 hours ago
          I made a small MCP script for the database with 3 tools:
          - listTables
          - getTableSchema
          - executeQuery (blocks destructive queries, like anything containing DROP, DELETE, etc.)
          I wouldn't trust textual instructions to prevent LLMs from dropping a table.
          • SatvikBeri 1 hour ago
            That's why I give the LLM a read-only connection.
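    A minimal sketch of the kind of `executeQuery` guard described above (a simple keyword denylist; the keyword list and function name are illustrative). As the reply notes, string matching is easy to bypass, so a read-only database role remains the safer backstop:

    ```python
    import re

    # Keywords that indicate a destructive statement (illustrative, not exhaustive).
    DENYLIST = ("drop", "delete", "truncate", "alter", "update", "insert", "grant")

    def is_safe_query(sql: str) -> bool:
        """Reject queries containing destructive keywords as whole words."""
        lowered = sql.lower()
        return not any(re.search(rf"\b{kw}\b", lowered) for kw in DENYLIST)

    # Note: a keyword check like this can be fooled (e.g. via stored procedures
    # or DO blocks), which is why pairing it with a read-only connection is wise.
    ```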
        • derrida 6 hours ago
          Oooooo, woah. I didn't really "get it" before - thanks for spelling it out a bit. I just thought of some crazy cool experiments I can run if that's true.
          • dkdcio 5 hours ago
            It's also for (typically) longer context you don't always want the agent to have in its context window. If you always want it in context, use rules (memories).
            But if it's something more involved or less frequently used (perhaps some debugging methodology, or designing new data schemas), skills are probably a good fit.
      • didibus 2 hours ago
        There can be a Django template skill, for example: just a markdown file that reminds the LLM of the syntax of Django templates and best practices for them. It could include a script the LLM can use to test a single template file.
      • freakynit 7 hours ago
        Skills are not useful for single-shot cases. They're for cross-team standardization (of LLM-generated code) and reliable reuse of existing code/learnings.
      • JamesSwift 6 hours ago
        Skills are the Matrix scene where Neo learns kung fu. Imagine a database of specialized knowledge that an agent can instantly tap into *on demand*.
        The key here is "on demand". Not every agent or conversation needs to know kung fu, but when one does, a skill is waiting to be consumed. This basic idea is "progressive disclosure", and it composes nicely to keep context windows focused. E.g. I have a Metabase skill to query analytics. Within it, I conditionally refer to how to generate authentication if the agent isn't authenticated; if it is, that information need not be consumed.
        Some practical skills: writing tests, fetching Sentry info, using Playwright (a lot of local MCPs are just flat-out replaced by skills), submitting a PR according to team conventions (e.g. run lint, review code for X, title matches format, etc.).
        • aed 6 hours ago
          Could you explain more about your Metabase skill and how you use it? We use Metabase (and generally love it) and I'm interested to hear how other people are using it!
  • btown 4 hours ago
    Something that's under-emphasized and vital to understand about skills: by the spec, there's no RAG on the *content* of skill code or markdown - the names and descriptions in *every* skill's front-matter are included *verbatim* in your prompt, and that's *all* that's used to choose a skill.
    So if you have subtle logic in a skill that's not mentioned in its description, or you use the skill body to describe use cases not obvious from the front-matter, it may never be discovered or used.
    Additionally, skill descriptions are all essentially prompt injections, whether or not they're relevant/vector-adjacent to your current task; if they nudge towards a certain tone, that may apply to your general experience with the LLM. And, of course, they add to your input tokens on every agentic turn. (This feature was proudly brought to you by Big Token.) So be thoughtful about what you load in which context.
    See e.g. https://github.com/openai/codex/blob/a6974087e5c04fc711af68f70fe93f7f5d2b0981/codex-rs/core/src/skills/render.rs#L16
    • jimmydoe 4 hours ago
      But that's the same for MCP and tools, no?
      • mkagenius 3 hours ago
        Yes. In fact, you can serve each skill as a tool exposed via MCP if you want. I did exactly that to make skills work with Gemini CLI (or any other tool that supports MCP) while creating Open-Skills.
        1. Open-Skills: https://github.com/BandarLabs/open-skills
        • brumar 8 minutes ago
          Interesting. Skills over MCP make a lot of sense in some contexts.
      • wincy 2 hours ago
        A consultant started recommending the Azure DevOps MCP, and my context window would start around 25% full. It's really easy to accidentally explode your token usage and destroy your context window. Before, I'd use az CLI calls as needed and tell the agent to do the same, which used significantly less context and was more targeted.
    • erichocean 4 hours ago
      Some agentic systems do apply RAG to skills; there's nothing about skills that requires blind insertion into prompts. This is really an agentic-harness issue, not an LLM issue *per se*.
      In 2026, I think we'll see agentic harnesses much more tightly integrated with their respective LLMs. You're already starting to see this, e.g. with Google's "Interactions" API and how different LLMs expect CoT to be maintained.
      There's a lot of alpha in co-optimizing your agentic harness with how the LLM is RL-trained on tool use and reasoning traces.
  • zahlman 3 hours ago
    Recently there was a submission (https://news.ycombinator.com/item?id=45840088) breaking down how agents are basically just a loop of querying an LLM, sometimes receiving a specially formatted "request to use a tool" (JSON in the example), and having the main program detect, interpret, and execute those requests.
    What do "skills" look like, generically, in this framework?
    • colonCapitalDee 2 hours ago
      Before the first loop iteration, the harness sends a message to the LLM along the lines of:
        <Skills>
          <Skill>
            <Name>postgres</Name>
            <Description>Directions on how to query the pre-prod postgres db</Description>
            <File>skills/postgres.md</File>
          </Skill>
        </Skills>
      The harness may then periodically resend this notification so that the LLM doesn't "forget" that skills are available. Because the notification is only name + description + file, this is cheap token-wise. The harness's ability to tell the LLM "IMPORTANT: this is a skill, so pay attention and use it when appropriate", and then periodically remind it of this, is what differentiates a proper Anthropic-style skill from just sticking "If you need to do postgres stuff, read skills/postgres.md" in AGENTS.md. Just how valuable is this? Not sure. I suspect that a sufficiently smart LLM won't need the special skill infrastructure.
      (Note that the skill name is not technically required; it's just a vanity/convenience thing.)
      • zahlman 2 hours ago
        > The harness's ability to tell the LLM "IMPORTANT: this is a skill, so pay attention and use it when appropriate" and then periodically remind them of this is what differentiates
        ... And do we know how it does that? To my understanding there is still no out-of-band signaling.
        • afro88 1 hour ago
          A lot of tools these days periodically put an extra `<system>` message into the conversation that the user never sees. It fights context rot and keeps important things fresh.
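    The index-rendering step described in this subthread can be sketched in a few lines (the XML-ish field names follow the example above; real harnesses differ in format and behavior):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Skill:
        name: str          # vanity/convenience identifier
        description: str   # the only content the model sees before loading
        file: str          # path the model reads on demand

    def render_skills_preamble(skills: list[Skill]) -> str:
        """Render name + description + file for each skill: the cheap,
        always-in-context index. Skill bodies are loaded only when chosen."""
        lines = ["<Skills>"]
        for s in skills:
            lines += [
                "  <Skill>",
                f"    <Name>{s.name}</Name>",
                f"    <Description>{s.description}</Description>",
                f"    <File>{s.file}</File>",
                "  </Skill>",
            ]
        lines.append("</Skills>")
        return "\n".join(lines)
    ```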
    • didibus 3 hours ago
      The agent can selectively load one or more of the "skills", meaning it pulls in a skill's prompt once it decides the skill should be loaded, and the skill can have accompanying scripts that the prompt also describes to the LLM.
      So it's just a standard way to bring prompts/scripts to the LLM, with support from the tooling directly.
  • freakynit 8 hours ago
    I was already doing something similar on a regular basis. I have many "folders", each with a README.md, a scripts folder, and an optional GUIDE.md.
    Whenever I arrive at some code that I know can be reused easily (for example, a clerk.dev integration that spans both frontend and backend), I create such a "folder".
    When needed, I just copy-paste all the folder content using my https://www.npmjs.com/package/merge-to-md package.
    This has worked flawlessly for me up until now. Glad we are bringing such a capability natively into these coding agents.
    • diamondfist25 25 minutes ago
      For some reason, what you said here explains what skills are in an ELI5 way that I can finally understand.
  • andybak 7 hours ago
    Skills, plugins, apps, connectors, MCPs, agents - anyone else getting a bit lost?
    • Frost1x 7 hours ago
      In my opinion it's to some degree an artifact of immature and/or rapidly changing technology. Not many know what the best approach is, all the use cases aren't well understood, and things are changing so rapidly that vendors are basically just creating interfaces around everything, so you can route flow in and out of LLMs any way you may desire.
      Some paths are emerging as popular, but in many cases we're still not sure these are the long-term paths that will remain. It doesn't help that there's no good taxonomy (that I'm aware of) to define and organize the different approaches. "Agent", for example, is a highly overloaded term that means a lot of things; even in this space, agents mean different things to different groups.
      • nlawalker 4 hours ago
        I liken the discovery/invention of LLMs to that of the electric motor: it's easy to take things like cars, drills, fans, and pumps for granted now, and all of the ergonomics and standards around them seem obvious in this era, but it took quite a while to go from "we can put power in this thing and it spins" to the state we're in today.
        For LLMs, we're just about at the stage where we've realized we can jam a sharp thing in the spinny part and use it to cut things. The race is on not only to improve the motors (models) themselves, but to invent ways of holding, manipulating, and taking advantage of this fundamental thing that feel so natural they seem obvious in hindsight.
    • didibus 2 hours ago
      None of them matter that much. They're all just ways to bring in context; think of them as conveniences.
      Tools are useful so the AI can execute commands, but beyond that it's all just ways to help you build the context for your prompt: either pulling in premade prompts that provide certain instructions or documentation, or providing more specialized tools for the model to use, along with instructions on using those tools.
    • not_a_toaster 7 hours ago
      They’re all bandaids
      • throwuxiytayq 6 hours ago
        Just like C++, JavaScript and every Microsoft product in existence
    • iLoveOncall 7 hours ago
      All marketing names for APIs and prompts. IMO you don't even need to try to follow along, because there's nothing inherently new or innovative about any of this.
    • maddmann 7 hours ago
      It reminds me of LLM output at scale: LLMs tend to produce a lot of similar but slightly different ideas in a codebase when not properly guided.
    • ksdnjweusdnkl21 6 hours ago
      It's like JS frameworks. Just wait until a React emerges and get up to speed with that later.
      • andybak 5 hours ago
        That's funny. My reaction to React emerging was to run away from JS frameworks entirely.
      • riffraff 6 hours ago
        React itself took a few years to decide how it should work (hooks not classes, etc.).
        • tartoran 6 hours ago
          The same will probably follow with LLMs. If you find something that works for you, sorry, but that will change.
  • astra90 5 hours ago
    I think skills could turn into something like open-source libraries: standardized solutions to common problems, often written by experts.
    Imagine having skills available that implement authentication systems, multi-tenancy, etc. in your codebase, without your having to know all the details of how to do this securely and correctly. This would probably boost code quality a lot and prevent insecure/buggy vibe-coded products.
    • JimDabell 5 hours ago
      And then you make a global index of those skills available to models, where they can search for an appropriate skill on demand, then download and use it automatically.
      A lot of the things we want continuous learning for can actually be provided by the ability to obtain skills on the fly.
  • orliesaurus 5 hours ago
    If there were a marketplace or directory of SKILL.md files, ranked and with comments, it would be a good way to propagate this tech.
    • NitpickLawyer 2 hours ago
      It would be trivial to create something like this, but there are a few major problems with running such a platform that I think make it not worthwhile for anyone (maybe some providers will try it, but it's still tough).
      - You will get a TON of spam. Just look at all the MCP folks spamming everywhere with their Claude-vibed MCP implementation of something trivial.
      - The security implications are enormous. You'd need a way to vet stuff, moderate, keep track of things, and so on. This only compounds with more traffic, so it'd probably become untenable really fast.
      - There's probably zero money in this. So you'd have to put a lot of work into maintaining a platform that attracts a lot of abuse/spam/prompt kiddies while getting nothing in return. This might make sense for some companies that can justify the cost, but at that point you'd wonder what's in it for them, and what control they exert over moderation/curation, etc.
      I think the best we'll get in this space is from "trusted" entities (i.e. recognised coders/personalities/etc.), from companies themselves (having skills in repos for known frameworks might become a thing, as with agents.md), and maybe from the token providers themselves.
    • dkdcio 5 hours ago
      Ask, receive! https://github.com/anthropics/skills
      Not ranked with comments, but I'd expect solid quality from these, and they should "just work" in Codex etc.
      • LordGrey 5 hours ago
        It looks like the Codex version is https://github.com/openai/skills.
    • relativeadv 5 hours ago
      It feels like people keep attempting this idea, largely because it's easy to build, but in practice people aren't interested in using others' prompts, because the cost to create a customized skill/GPT/prompt/whatever is near zero.
      • true2octave 5 hours ago
        People want inspiration rather than off-the-shelf prompts - more like a gallery than a marketplace.
    • nickdichev 4 hours ago
      I created a skill for writing skills (based on the Anthropic docs). I think the value is really in making skills work for your workflows and codebase.
  • tacone 2 hours ago
    I don't understand how skills are different from just instructing your model to read all the front-matter from a given folder on your filesystem and then decide whether to read each file's body.
    • pests 1 hour ago
      That is basically what it is, though.
      One difference is that the model might have been trained/fine-tuned to be better at "read all the front-matter from a given folder on your filesystem and then decide..." compared to a model with those instructions only in its context.
      Also, does your method run scripts and code in any kind of sandbox or other containment, or do you give it complete access to your system? #yolo
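    The DIY version being discussed - scan a folder and keep only each skill's front-matter as a cheap index - can be sketched like this (a simplification that hand-parses `key: value` front-matter rather than using a real YAML parser):

    ```python
    from pathlib import Path

    def scan_skill_index(skills_dir: str) -> list[dict]:
        """Collect each skill's front-matter as an index entry; the bodies
        stay on disk until the model asks for a specific file."""
        index = []
        for md in sorted(Path(skills_dir).glob("*.md")):
            lines = md.read_text().splitlines()
            if not lines or lines[0].strip() != "---":
                continue  # no front-matter block; skip this file
            meta = {"file": md.name}
            for line in lines[1:]:
                if line.strip() == "---":
                    break  # end of front-matter; ignore the body entirely
                key, _, value = line.partition(":")
                meta[key.strip()] = value.strip()
            index.append(meta)
        return index
    ```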
    • shimman 1 hour ago
      Yes, I'm confused as well. It feels like it's still all prompting, which isn't new or different in the LLM space.
      • mbreese 47 minutes ago
        It's all just loading data into the context/conversation. Sometimes, as part of the chat response, the LLM will request that the client do something - read a file, call a tool, etc. The results of that end up back in the context as well.
  • ithkuil 5 hours ago
    I wonder if generated skills could be useful to codify the outcome of long sessions where the agent has tried a bunch of things and finally settled on a solution, based on a mixture of test failures and user feedback.
    • dkdcio 5 hours ago
      Yeah, I have a "meta" skill and often use it after a session to instruct CC to update its own skills/rules. Get the flywheel going.
  • ericflo 2 hours ago
    People are really misunderstanding skills, in my opinion. It's not really about the .md file; it's about the bundling of code and instructions. Skills assume a code-execution environment.
    • chickensong 35 minutes ago
      You could already pre-approve an executable and just call it from your prompt. The context savings from adding/indexing metadata and dynamically loading the rest of the content as needed is the big win here, IMHO.
  • arnabgho 4 hours ago
    Anthropic: Chief Product Officer of OpenAI
    • jimmydoe 4 hours ago
      Even better: compensation-free.
  • mellosouls 4 hours ago
    How can skills be monetised by creators?
    Obviously they empower Codex and Claude etc., and many will be open source or free. But for those who have commercial resources or tools to add to the skills ecosystem, is there documentation for doing that smoothly, or a pathway to it?
    I can see at least a couple of ways it might be done - skills requiring API keys or other authentication approaches - but this adds friction to an otherwise smooth skill-integration process.
    Having instead a transparent commission on usage, sent to registered skill suppliers, would be much cleaner, but I'm not confident it would be offered fairly, and I've seen no guidance yet on plans in that regard.
  • pupppet 5 hours ago
    How are skills different from tool/function calling?
    • mkagenius 44 minutes ago
      You can somewhat achieve what skills achieve via function calling. I have this mental map:
      - Front-matter <---> name and arguments of the function
      - Text part of the skill .md <---> description field of the function
      - Code part of the skill <---> body of the function
      But the function wouldn't look as organised as the .md; also, a skill can have multiple function definitions.
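    Under that mental map, projecting a skill onto a tool definition might look roughly like this (the dict shape follows the common JSON tool-definition convention; the function name and field contents are illustrative, and the skill's scripts would back the actual implementation):

    ```python
    def skill_to_tool(front_matter: dict, body_md: str) -> dict:
        """Map a skill's parts onto a JSON tool definition:
        front-matter name -> function name, markdown text -> description."""
        return {
            "type": "function",
            "function": {
                "name": front_matter["name"],
                "description": body_md.strip(),  # full instructions as description
                "parameters": {"type": "object", "properties": {}},
            },
        }
    ```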
    • esafak 4 hours ago
      It's the catalog for the tools. Especially useful if you have custom tools; they expect the basics like grep and jq to be there.
    • jinushaun 5 hours ago
      I agree. I don't see how this is different from tool calling; we just put the tool instructions in a folder of markdown files.
      • yousif_123123 4 hours ago
        It doesn't need to describe a function. It could explain the skill in any way; it's just more instructions and metadata, loaded just in time rather than given to the model all at once.
  • mikaelaast 8 hours ago
    Are we sure that unrestricted free-form markdown content is the best configuration format for this kind of thing? I know there's a YAML front-matter component, but doesn't the free-form nature of the "body" of these configuration files lead to an inevitably unverifiable process? I would like my agents to be inherently evaluable, and free-text instructions don't lend themselves easily to systematic evaluation.
    • coldtea 7 hours ago
      > *doesn't the free-form nature of the "body" part of these configuration files lead to an inevitably unverifiable process?*
      The non-deterministic, statistical nature of LLMs means it's inherently an "inevitably unverifiable process" to begin with, even if you pass it some type-checked, linted skills file or prompt format.
      Besides, YAML or JSON or XML or free-form text - to the LLM it's all just tokens.
      At best you could parse the more structured docs with external tools more easily, but that's about it; there's not much difference when it comes to LLM consumption.
    • Etheryte 8 hours ago
      The modern state of the art is inherently not verifiable; which way you give it input is really secondary to that fact. When you can't see the weights or know anything else about the system, any idea of verifiability is an illusion.
      • mikaelaast 8 hours ago
        Sure, verifiability is far-fetched. But say I want to produce a statistically significant evaluation result from this - essentially testing a piece of prose. How do I go about it, short of relying on a vague LLM-as-a-judge metric? What are the parameters?
        • visarga 2 hours ago
          You absolutely need to test work done by AI. If it's code, it needs to pass extensive tests; if it's just a question answered, it needs to be the common conclusion of multiple independent agents. You can trust a single AI about as much as an HN or Reddit comment, but you can trust a committee of four like a real expert.
          More generally, I think testing AI using its web search, code execution, and ensembling is the missing ingredient for increased usage. We need to define the opposite of AI work: what validates it. This is hard, but once done you can trust the system, and it becomes cheaper to change.
        • coldtea 7 hours ago
          Would a structured skills-file format help you evaluate the results more?
          • mikaelaast 6 hours ago
            Yes. It would be much easier to evaluate results if the input contents were parameterized and normalized to some agreed-upon structure - not to mention the advantages for iteration and improvement.
            • coldtea 3 hours ago
              > if the input contents were parameterized and normalized to some agreed-upon structure
              Only the format would be. There's no rigid structure that gets any preferential treatment from the LLM, even if it did accept one; in the end it's just instructions, no different in any way from the prompt text.
              And nothing stops you from making something "parameterized and normalized to some agreed-upon structure" and passing it directly to the LLM as skill content, or parsing it and dumping it as regular skill text.
      • hu3 8 hours ago
        At least MCPs can be unit tested. With skills, however, you just selectively append more text to the prompt and pray.
    • heliumtera 2 hours ago
      Then rename your markdown skill files to skills.md.yaml. There you go; you're welcome.
  • well_ackshually 4 hours ago
    Ah, yes: simple text files that describe concepts, which may contain references to other concepts, or references to dive in deeper. We could even call these something like a "link". And they form a sort of... web, maybe?
    Close enough. Welcome back, index.htm; can't wait to see the first ads being served in my skills.
    • username223 3 hours ago
      Imagine SUBPROGRAMs that implement well-specified sequences of operations in a COmmon Business-Oriented Language, which can CALL each other. We are truly sipping rocket fuel.
  • user3939382 49 minutes ago
    What they’re calling skills is a 5% weak implementation of what skills should be. My AI models fix this.
  • stared 8 hours ago
    Yes! I was raving about Claude Skills a few days ago (see https://quesma.com/blog/claude-skills-not-antigravity/), and I'm excited they're coming to Codex as well!
    • derrida 6 hours ago
      Thanks for that! You mentioned Antigravity seemed slow. I just started playing with it too (but haven't really given it a good go yet to evaluate it), and I had the model set to Gemini Flash - maybe you get a speedup if you do that?
      • stared 6 hours ago
        My motivation was to use the smartest model available (overall, not only from Google) - I wanted to squeeze more out of Gemini 3 Pro than in Cursor. With new model releases there are usually outages; these things are ever-changing.
        That said, for many tasks (summaries and data extraction) I do use Gemini 2.5 Flash, as it's cheap and fast. So I'm excited to try Gemini 3 Flash as well.
  • rdli 8 hours ago
    This is great. At my startup we have a mix of Codex/CC users, so having a common set of skills we can all use for building is exciting.
    It's also interesting to see that instead of a plan mode like CC's, Codex is implementing planning as a skill.
    • greymalik 7 hours ago
      I'm probably missing it, but I don't see how you can share skills across agents, other than maybe symlinking .claude/skills and .codex/skills to the same place?
      • rdli 7 hours ago
        Nothing super fancy. We have a common GitHub repo in our org for skills, and everyone checks it out into their preferred local setup.
        (To clarify, I meant that some engineers mostly use CC while others mostly use Codex, as opposed to engineers using both at the same time.)
      • hugh-avherald 5 hours ago
        Codex 5.2 automatically picked up my Claude agents' skills. I didn't prompt for it; it just so happened that one of my Claude agents' prompts was useful for what I asked, so Codex ran with it.
  • jonrosner 8 hours ago
    One thing I'm missing from the specification is a way to inject specific variables into skills. If I create, say, a postgres skill, then I can either (1) provide the password on every skill execution or (2) hardcode the password into my script. To make this really useful there needs to be some kind of secret storage the agent can read/write. This would also make it easier for me, as a programmer, to sell the skills I create to customers.
    • j_bum 6 hours ago
      I have no clue how you're running your agents or what you're building, but giving the raw password string to the model seems dubious.
      Otherwise, why not just keep the password in a .env file, and state "grab the password from the .env file" in your postgres skill?
      • jonrosner 6 hours ago
        I'm thinking of distributing skills that I build to my clients. As my clients are mostly non-technical users, I need the distribution process to be as easy as possible; even adding a .env file would probably be too much for most of them. With skills I can finally distribute my logic easily - just send the raw files and tell them to put them in a folder, done. But there's no easy way for them to "set up" the credentials in those skills yet. The best UX, in my opinion, would be for Codex (or Claude, doesn't matter) to ask for those setup parameters once, when first using the skill, and process the inputs in a secure manner, i.e. some internal secret storage.
    • bavell 6 hours ago
      > there needs to be some kind of secret storage that the agent can read/write
      Why not the filesystem? I would create a local file (e.g. .env) in each project using postgres, then, in my postgres skill, tell the agent to check that file for credentials.
  • not_a_toaster 7 hours ago
    We've made a zero-shot decision tree.
  • summarity 9 hours ago
    See also:
    Anthropic: https://www.anthropic.com/engineering/equipping-agents-for-the-real-world-with-agent-skills
    Copilot: https://github.blog/changelog/2025-12-18-github-copilot-now-supports-agent-skills/
  • rochansinha 12 hours ago
    Agent Skills let you extend Codex with task-specific capabilities. A skill packages instructions, resources, and optional scripts so Codex can perform a specific workflow reliably. You can share skills across teams or the community, and they build on the open Agent Skills standard.
    Skills are available in both the Codex CLI and the IDE extensions.
    • dan_wood 10 hours ago
      Thanks to Anthropic.
  • alexgotoi 7 hours ago
    At any HR conference you go to, there are two overused words: AI and skills. As of this week, this also applies to Hacker News.
  • karolcodes 8 hours ago
    Anyone using this in an agentic workflow already? How is it?
  • haffi112 9 hours ago
    What are your favourite skills?
    • frankc 5 hours ago
      The skills that matter most to me are the ones I create myself (with the skill-creator skill) that are very specific and proprietary - for instance, a skill on how to write a service in my back-testing framework.
      I also like to make skills for more niche tools, like marimo (a very nice Jupyter replacement). The model probably does know some stuff about it, but not enough, and while the agent could find enough online or in context7, it would waste a lot of time and context figuring it out every time. So instead I have a deep-thinking agent do all that research up front and build a skill from it. I might customize it to be more specific to my environment, but it's mostly the condensed research of the agent, so that I don't need to redo it every time.
    • dmd 7 hours ago
      A very particular set of skills.
    • pylotlight 8 hours ago
      nunchuck skills