
Demonstrating you can do it yourself shows a level of investment and commitment to AI in your platform that integrating Llama does not.

And from a corporate perspective, it means you have in-house capability to work at the cutting edge of AI and be prepared for whatever comes next.



> Demonstrating you can do it yourself shows a level of investment and commitment to AI in your platform that integrating Llama does not.

I buy this argument. That doesn't seem to be what AWS does, though, yet they have no problem attracting LLM users. Maybe AWS already has enough of a reputation?


It's easier because 70% of the market already has an AWS account and a sizeable budget allocated to it. The technical team is literally one click away from any AWS service.


I may be misunderstanding, but doesn't Amazon have its own models in the form of Amazon Titan[0]? I know they aren't competitive in terms of output quality, but surely on cost there are some use cases for them.

[0] https://aws.amazon.com/bedrock/titan/
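
For what it's worth, Titan is invoked through the same Bedrock runtime call as any other hosted model, which is part of the "one click away" convenience mentioned above. A minimal sketch, assuming boto3 and the documented Titan Text request shape (exact parameter names and the model ID are worth double-checking against the current Bedrock docs):

```python
# Sketch of calling Amazon Titan Text through the Bedrock runtime API.
# The request shape and model ID below follow Titan Text documentation,
# but treat them as assumptions to verify, not a definitive reference.
import json

def build_titan_request(prompt, max_tokens=256, temperature=0.2):
    """Build the JSON body for a Titan Text invoke_model call."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
        },
    })

def invoke_titan(prompt):
    """Send the request via boto3 (requires AWS credentials and Bedrock access)."""
    import boto3  # assumed available in an AWS environment
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.invoke_model(
        modelId="amazon.titan-text-express-v1",  # one of the cheaper Titan text models
        body=build_titan_request(prompt),
    )
    return json.loads(response["body"].read())["results"][0]["outputText"]

print(build_titan_request("Summarize this ticket in one sentence."))
```

Since billing is per token through the existing AWS account, the cost argument mostly comes down to Titan's per-token price versus the pricier frontier models on the same endpoint.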




