
In 2016 & 2017 my team at Capital One built several >1B parameter models combining LSTMs with a few other tricks.

We were able to build generators that could replicate any dataset they were trained on: the outputs contained novel deviations, yet still matched the statistical properties of the original data.

https://medium.com/capital-one-tech/why-you-dont-necessarily...
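The core idea, generating new sequences that diverge from the training data while preserving its statistics, can be illustrated with a much simpler stand-in than an LSTM. This is a hedged sketch only: a bigram sampler (my own toy substitute, not the models described above) that fits symbol-to-symbol transition counts and then samples fresh sequences whose transitions match those counts.

```python
import random
from collections import Counter, defaultdict

def fit_bigrams(sequences):
    """Count symbol-to-symbol transitions across the training sequences."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, start, length, rng):
    """Sample a new sequence whose transition statistics mirror training."""
    out = [start]
    for _ in range(length - 1):
        nxt = counts.get(out[-1])
        if not nxt:  # dead end: no observed successor for this symbol
            break
        symbols, weights = zip(*nxt.items())
        out.append(rng.choices(symbols, weights=weights)[0])
    return out
```

An LSTM generator plays the same role but conditions on the full history rather than just the previous symbol, which is what lets it capture longer-range structure in the data.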

We built several text generators for bots that similarly had very good results. The introduction of the transformer improved speed and reduced the training / data requirements, but honestly the change in accuracy was minimal.


