Depends on the purpose. I don't think the various parameters (like temperature, top_p) are fully known when it comes to ChatGPT, and neither is the "system prompt" they're using. With the API, you have full control and visibility of those.
If you really want to compare "performance"/"quality", you'd have to do it via the API, using known, static parameters and a pinned model version — sorry, *and* a pinned model version. None of that is available via ChatGPT.
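To make that concrete, here's a minimal sketch of what a reproducible setup might look like. This assumes the OpenAI Python SDK; the model snapshot name, seed value, and system prompt are all illustrative, not anything the original question specified:

```python
def build_request(prompt: str) -> dict:
    """Build a fully-specified request so every run uses identical settings.

    Everything ChatGPT hides (system prompt, sampling parameters, model
    version) is pinned explicitly here.
    """
    return {
        "model": "gpt-4-0613",  # a pinned snapshot, not a floating alias like "gpt-4"
        "temperature": 0.0,     # fixed sampling parameters
        "top_p": 1.0,
        "seed": 42,             # supported on some models, for best-effort reproducibility
        "messages": [
            # A known system prompt, unlike ChatGPT's undisclosed one
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

# With the SDK you would then send it along the lines of:
#   client = openai.OpenAI()
#   resp = client.chat.completions.create(**build_request("Your test prompt"))
```

The point isn't the specific values, it's that every one of them is visible and held constant across runs, which is the precondition for any meaningful comparison.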