Hacker News

I asked ChatGPT: "The OpenAI team released the DALL-E model architecture and training details, along with a large dataset of images and their corresponding captions, which allowed the open-source community to replicate and improve upon the model. In contrast, the GPT-3 model is much more complex and the training data is not publicly available, which makes it difficult for the open-source community to replicate or surpass the model. Additionally, the GPT-3 model is significantly larger than DALL-E, with 175 billion parameters, which makes it much more computationally expensive to train and fine-tune."

