I think you're overly enthusiastic about what's going on here (which is surprising, because you've seen that the trend in AI seems to be re-inventing the wheel every other year...)
MCP was conceptually quite complicated, and a pretty big lift in terms of implementation for both servers and clients.
Skills are conceptually trivial, and implementing them is easy... provided you have a full Linux-style sandbox environment up and running already. That's a big dependency, but it's also an astonishingly powerful way to use LLMs based on my past 6 months of exploration.
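To illustrate why I call them trivial: as far as I can tell, a skill is basically just a folder containing a SKILL.md file, markdown instructions plus a little YAML frontmatter with a name and description. Here's a rough hypothetical example (the `csv-cleanup` name, the `.claude/skills/csv-cleanup/` location, and the instructions themselves are all made up for illustration):

```markdown
---
name: csv-cleanup
description: Use this skill when asked to normalize or deduplicate CSV files in the project.
---

# CSV cleanup

1. Read the target CSV with Python's csv module.
2. Normalize column headers to snake_case.
3. Drop exact duplicate rows and write the result to a new file alongside the original.
4. Report how many rows were removed.
```

The client scans those folders, shows the model the name/description pairs up front, and only pulls the full file into context when it looks relevant, then the sandbox does the rest. That's the whole mechanism, which is why the contrast with MCP's protocol machinery is so stark.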
I remain afraid of prompt injection. If I'm telling Claude Code to retrieve data from issues in public repos, there's a risk someone has left a comment crafted to make it steal API keys, delete files, or similar.
I'm also worried about Claude Code making a mistake, like deleting files in folders outside my project that I didn't want touched.
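The best mitigation I've found for that second worry is to run Claude Code in a throwaway container with only the project directory mounted, so the worst case is confined to that one folder. A rough sketch, with the image choice and API-key auth being my own assumptions rather than anything official:

```bash
# Disposable container with ONLY the current project mounted
docker run --rm -it \
  -v "$(pwd)":/workspace \
  -w /workspace \
  -e ANTHROPIC_API_KEY \
  node:20-bookworm bash

# Inside the container: install and run Claude Code
npm install -g @anthropic-ai/claude-code
claude
```

Note this only limits file damage; it does nothing for the prompt injection concern, since any credentials you hand the container can still be exfiltrated.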