
Just to share a perspective. Doing business in Japan often requires a certain level of trust. Showing that you (the presenter) have something to count on is thus important.


I should have written that I hate such presentations when the history is up front. Tell me what you’re gonna do for me, tell me how so I believe you, tell me that you’ve been doing it a while for others so I believe you, then, if you want, tell me that you’ve been around for a while so I can depend on you.

But all that should be provided with my interests in mind, not your own ego stroking.

That said, different countries have different cultures in this regard. But people's attention isn't different, and it's a shame to waste the period of maximum attention on the thing that helps the least in sales.


The slides describing the "whys" of Mojo and its early vindication did a good job of dispelling suspicion around its marketing hype, imo.


Because it's fun :-) And the effort to bring up a potentially commercial version is ongoing.

https://www.together.xyz/blog/redpajama


For a deeper look into stablecoin mechanisms and their performance during the May 2022 event, I would recommend [1], which I think is much more interesting for curious readers.

[1] https://www.fdic.gov/analysis/cfr/bank-research-conference/a...


The modeling of convenience yield looks way off. Some actors get way more utility; there’s an impact cost to purchasing large amounts; bounding utility by r makes no sense in real-world markets.


The model used by this research is T5, which is already open source. So I think once the dataset is released, we'll see an open version of the pre-trained model very soon.
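In the meantime, anyone curious can poke at the public T5 weights themselves. Here's a minimal sketch using Hugging Face's transformers library; the t5-base checkpoint and the prompt are just illustrative choices, not the research's own fine-tuned weights:

    # pip install transformers sentencepiece torch
    from transformers import T5Tokenizer, T5ForConditionalGeneration

    # Load the publicly released T5 weights (t5-base is an assumption;
    # swap in whichever size you want to experiment with).
    tokenizer = T5Tokenizer.from_pretrained("t5-base")
    model = T5ForConditionalGeneration.from_pretrained("t5-base")

    # T5 is text-to-text: prefix the input with a task description.
    input_ids = tokenizer("translate English to German: Hello, world!",
                          return_tensors="pt").input_ids
    outputs = model.generate(input_ids, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))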




Thanks, I tried it but it has the same problem as: https://news.ycombinator.com/item?id=32033480 and is actually a bit worse.


Does it actually work? I haven't managed to make it produce anything but empty pages.

Also, I didn't appreciate that it downloaded models behind my back.


Obviously it will relate to complexity in deep learning, but I can't resist thinking about some AI models involving quantum computation stuff :-)


Actually no. Each layer requires the output from the previous one, which forces sequential computation, while wider layers can make better use of GPU parallelism. This is a kind of trade-off between less memory (fewer parameters) and longer time.
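To make that concrete, here's a small PyTorch sketch; the layer sizes are arbitrary, chosen only so the two nets have roughly comparable parameter counts:

    import torch
    import torch.nn as nn

    # Deep-and-narrow vs. shallow-and-wide. The deep net's 16 matmuls must
    # run one after another (each layer needs the previous layer's output),
    # while the wide net does just 2 large matmuls that a GPU can
    # parallelize internally.
    deep = nn.Sequential(*[nn.Linear(256, 256) for _ in range(16)])
    wide = nn.Sequential(nn.Linear(256, 4096), nn.Linear(4096, 256))

    x = torch.randn(1024, 256)
    with torch.no_grad():
        y_deep = deep(x)  # latency dominated by the sequential chain
        y_wide = wide(x)  # latency dominated by two parallel-friendly matmuls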


Hayabusa-2 is the first successful mission of its kind. And it got congrats from Elon Musk too! [1] Looking forward to new discoveries about the early Solar System from the asteroid sample it brought back.

[1] https://twitter.com/haya2e_jaxa/status/1335513393122316288


To me, learning cause and effect is a non-trivial process: recognizing a relationship often comes first and triggers a "formal" reasoning process afterwards (if any).

The more formal that later process is, the closer we get to real "knowledge". So your suggestion is quite practical, in the sense that we can start from there and figure out how to push the engine toward the more formal end of the spectrum.

