
What’s the cost to use this on a larger scale?


Everything in the notebook is open-source. It mainly uses the following projects:

https://github.com/neuml/txtai
https://github.com/kuprel/min-dalle

txtai workflows can be containerized and run as a cloud serverless function - https://neuml.github.io/txtai/cloud/
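As a rough sketch of what that containerization could look like (the base image, config filename, and port here are illustrative assumptions, not from the notebook; see the linked docs for the actual conventions):

```dockerfile
# Illustrative only: install txtai's API extras and serve a workflow
# defined in config.yml. Take the exact base image and entrypoint from
# the txtai cloud documentation linked above.
FROM python:3.9-slim
RUN pip install "txtai[api]"
COPY config.yml .
ENV CONFIG=config.yml
CMD ["uvicorn", "txtai.api:app", "--host", "0.0.0.0", "--port", "8000"]
```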


I assume he meant the computational cost.


OK, I misunderstood there. Running the code, which generates 15 images, takes about 2 minutes on a standard GPU Colab environment. It may be possible to submit the text summaries to the DALL-E mini model as a single batch, which would improve performance a good deal.
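To illustrate the batching idea: send all 15 summaries to the image model in one call instead of one call per summary. This is a stub sketch, `generate_images` here is a stand-in counter, not the real DALL-E mini API:

```python
# Stub sketch of batched vs. unbatched inference; counts model invocations.
call_count = [0]  # mutable counter so the stub can record invocations

def generate_images(prompts):
    """Stand-in model call: one invocation handles an entire batch of prompts."""
    call_count[0] += 1
    return [f"image for: {p}" for p in prompts]

summaries = [f"summary {i}" for i in range(15)]

# Unbatched: one model invocation per summary (15 calls)
call_count[0] = 0
unbatched = [generate_images([s])[0] for s in summaries]
unbatched_calls = call_count[0]

# Batched: a single invocation covers all summaries (1 call)
call_count[0] = 0
batched = generate_images(summaries)
batched_calls = call_count[0]

assert unbatched == batched  # same results, far fewer model round-trips
```

With a real model, the batched call amortizes per-invocation overhead (weight loading, kernel launches) across all prompts.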


We looked into it. At current TPU/GPU prices it's about $0.30 to $0.60 an image.
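For scale, at those per-image rates a single 15-image notebook run works out as follows (simple arithmetic on the quoted range, not an official estimate):

```python
# Back-of-envelope cost per notebook run at the quoted per-image rates
images_per_run = 15
low_rate, high_rate = 0.30, 0.60  # USD per image

low_cost = images_per_run * low_rate
high_cost = images_per_run * high_rate
print(f"${low_cost:.2f} to ${high_cost:.2f} per 15-image run")  # $4.50 to $9.00
```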


Was this a cost just for the DALL-E piece or the full text extraction, summarization and image generation workflow?


DALL-E mini, not DALL-E; just making sure everyone has the right expectations.


Just the DALL-E piece.


Thanks, good to know.



