When computers/internet first came about, there were (and still are!) people who would struggle with basic tasks. Without knowing the specific task you're trying to do, it's hard to judge whether it's a problem with the model or with you.
I would also say that prompting isn't as simple as it's made out to be. It is a skill in itself and requires you to be a good communicator. In fact, I would say there's a reasonable chance that even if we end up with AGI-level models, a good chunk of people won't be able to use them effectively because they can't communicate requirements clearly.
So it's a natural language interface, except it's only useful if we stick to a subset of natural language. Then we're stuck trying to reverse engineer an undocumented, non-deterministic API, one that will keep changing underneath whatever you build on top of it. That's a pretty horrid value proposition.
Short of it being able to read your mind, you need to communicate with it in some way. It's no different from the real world, where you'll have a harder time getting things done if you can't communicate effectively. I imagine for a lot of popular use cases we'll build a simpler UX that lets people click and tap before anything gets sent to a model.