
I'm assuming the intent was to ask if the *.prompt files could have a shebang line.

   #!/usr/bin/env runprompt
   ---
   .frontmatter...
   ---
   
   The prompt.
Would be a lot nicer, as then you can just +x the prompt file itself.
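As a sketch of what I mean (runprompt is the interpreter from the shebang above; the file name is just an example):

    chmod +x review.prompt
    ./review.prompt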

There seems to be an awful lot of "could" and "might" in that part. Given how awfully limited the Gemini integration inside Google Docs is, it's an area that's just made me feel Google is executing really slowly on this.

I've built a document editor that has AI properly integrated - it provides feedback in "Track Changes" mode and actually gives good writing advice. If you've been looking for something like this: https://owleditor.com

It looks nice, but for my use it's very specifically not reviews I want from AI integration with an editor, but the ability to prompt it to write or rewrite large sections, or fix up repeated references to specific things, with minimal additional input. I specifically don't want to go through and approve edit by edit - I'll read through a diff and approve it all at once, or just tell it how to fix its edits.

Claude at least is more than good enough to do this for dry technical writing (I've not tried it for anything more creative), and so I usually end up using Claude Code to do this with markdown files.


Proxima flares and bathes Proxima Centauri b in radiation when it does, so it seems unlikely to be particularly habitable. But it's still tantalising...

Yeah, I spend most of my days keeping up with current AI development, and I'm still only scratching the surface of how to integrate it into my own business. For people for whom it's not their actual job, it will take a lot more time to figure out even which questions to ask about where it makes sense to integrate it into their workflows.

The thing is, MCP is little more than another self-describing API format, and current models can handle most semi-regular APIs with just a description and basic tooling. I had Claude interact with my app server via curl before I decided to just tell it to write an API client instead. I could have told it to implement MCP instead, but now I have a CLI client that I can use as well, and Claude happily uses it with just the --help output.
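To illustrate what I mean by "a description and basic tooling" - the endpoint and fields here are made up, but this is the level of thing the model can work out from the docs alone:

    # hypothetical endpoint; the model only needs the API docs to write this
    curl -s -H "Authorization: Bearer $API_TOKEN" \
      "https://api.example.com/v1/orders?status=open"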

If you don't already have an API, sure, MCP is a possible choice for that API. But if you do have an API, there's less and less reason to bother implementing an MCP server as the models get smarter, vs. just giving them access to your API docs.


Skills are just markdown files in a folder that any agent that can read files can figure out.

Just tell your non-Claude agent to read your skills directory, and extract the preambles.


There's no lock-in there.

Tell your agent of choice to read the preamble of all the documents in the skills directory, and tell it that when it has a task that matches one of the preambles, it should read the rest of the relevant file for full instructions.

There are far fewer dependencies for skills than for MCP. Even a model that knows nothing about tool use beyond how to run a shell command, and has no support for anything else can figure out skills.
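As a rough sketch (assuming the usual layout of one SKILL.md per skill directory, with a YAML frontmatter preamble), "extract the preambles" is nothing more than something like:

    # print only the frontmatter block of every skill
    for f in ~/.claude/skills/*/SKILL.md; do
      echo "== $f"
      awk '/^---$/{n++; next} n==1' "$f"
    done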

I don't know what you mean regarding explicitly referencing other skills - Claude at least is smart enough that if you reference a skill that isn't even properly registered, it will often start using grep and find to hunt for it to figure out what you meant. I've seen this happen regularly while developing a plugin and having errors in my setup.


> There are far fewer dependencies for skills than for MCP.

This is wrong, and an example of magical thinking. AI obviously does not mean that you can ship/use software without addressing dependencies. See for example https://github.com/anthropics/skills/blob/main/slack-gif-cre... or, worse, the many other skills that just punt on this and assume CLI tools and libraries are already available.


It is categorically not wrong. With an MCP you have at a minimum all the same dependencies, and on top of that a dependency on your agent supporting MCP. With skills, a lot of the time you don't need to ship code at all - just an explanation to the agent of how to use standard tools to access an API, for example. And when you do need to ship code, you don't need to ship any more of it than with an MCP.

The trivial evidence of this is that if you have an MCP server available, the skill can simply explain to the agent how to use the MCP server, and so even the absolute worst case for skills is parity.


This is no different to an MCP, where you rely on the model to use the metadata provided to pick the right tool, and understand how to use it.

Like with MCP, you can provide a deterministic, known-good piece of code to carry out the operation once the LLM decides to use it.

But a skill can evolve from pure Markdown, via inlining some shell commands, up to a large application. And if you let it, the LLM can also inspect the tool and modify it if that will help you.

All the Skills I use now have evolved bit by bit as I've run into new use-cases and told Claude Code to update the script the skill references or the SKILL.md itself. I can evolve the tooling while I'm using it.


It's more that they are embracing that the LLM is smart enough that you don't need to build in this functionality beyond that very minimal part.

A fun thing: Claude Code will sometimes fail to find the skill the "proper" way, and will then sometimes fall back to looking for the SKILL.md file with its tools and reading it directly, showing that it's perfectly capable of doing all the steps itself.

You could probably "fake" skills pretty well with instructions in CLAUDE.md to use a suitable command to extract the preamble of files in a given directory, and tell it to use that to decide when to read the rest.
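Something along these lines in CLAUDE.md would probably get you most of the way (the directory path is just an example):

    ## "Skills"
    Before starting a task, run:
      awk '/^---$/{n++; next} n==1' docs/skills/*/SKILL.md
    If the task matches one of the descriptions printed, read the full
    SKILL.md for that skill and follow its instructions.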

It's the fact that it's such a thin layer that is exciting - it means we need less and less special logic beyond basic instructions to the model itself.


Skills are less exciting because they're effectively documentation that's selectively loaded.

They are a bigger deal in a sense because they remove the need for all the scaffolding MCPs require.

E.g. I needed Claude to work on transcripts from my Fathom account, so I just had it write a CLI script to download them, and then I had it write a SKILL.md, and didn't have to care about wrapping it up into an MCP.

At a client, I needed a way to test their APIs, so I just told Claude Code to pull out the client code from one of their projects and turn it into a CLI, and then write a SKILL.md. And again, no need to care about wrapping it up into an MCP.
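The SKILL.md in these cases is nothing fancy - roughly this shape, assuming the usual name/description frontmatter (the names here are made up):

    ---
    name: fathom-transcripts
    description: Download and work with Fathom call transcripts using the CLI script in this directory.
    ---

    # Fathom transcripts

    Run ./fathom_transcripts --help to see the available commands.
    Typical flow: list the recent calls, then download the transcript
    you need as markdown and work on it locally.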

But this seems a lot less remarkable, and there's a lot less room to build big complicated projects and tooling around it, and so, sure, people will talk about it less.


Skills are good for context management as everything that happens while executing the skill remains “invisible” to the parent context, but they do inherit the parent context. So it’s pretty effective for a certain set of problems.

MCP is completely different; I don't understand why people keep comparing the two. A skill cannot connect to your Slack server.

Skills are more similar to sub-agents, the main difference being context inheritance. Sub-agents enable you to set a different system prompt for them, which is super useful.


A skill can absolutely connect to your Slack server. Either by describing how to use standard tools to do so, or by including code.

Most of my skills connect to APIs.
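E.g. a skill that just tells the agent something like this is already a "Slack connection" (chat.postMessage is the real Slack API method; the token variable is whatever you've set up):

    # post a message to a channel using a bot token
    curl -s -X POST https://slack.com/api/chat.postMessage \
      -H "Authorization: Bearer $SLACK_BOT_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"channel":"#general","text":"Hello from a skill"}'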


Are you sure? I thought skills were loaded into the main context, unlike (sub)agents. According to Claude they're loaded into the main context. Do you have a link?

No, just their header / when they should be invoked; the actual contents of the skill are never loaded into the main context.

Unless Claude decides a skill is needed, in which case it loads the additional details into the main context to use. It's basically lazy loading into the main context.
