It's not open source at the moment, but we're definitely open to it.


totally pointless if it's not open source


Thanks for all the support Bryan and team!


Who do you think is the sweet spot user for Rivet? Anyone building any LLM app, or a certain kind (e.g. agents)? Is there a use case for which you'd advise against using Rivet?


I think it's people building tool-using agent applications.

We've been collaborating with several amazing teams over the past few months, who have been pushing Rivet in various ways. We used it for a chat interface at Ironclad, but we've seen companies like Bento and Willow integrate it with different UX paradigms.

The commonality seems to be that we are all integrating LLMs into an application, and want the LLM to somehow interact with that application (set up search filters, build a guide based on documentation).


I'm not super familiar with Dolt's SQL implementation, but I'm surprised that a simple `count(*)` query timed out:

https://www.dolthub.com/repositories/dolthub/transparency-in...


Queries on DoltHub need to go to S3 to fetch all the chunks. This only works for databases < 1GB generally. You will get much better performance if you clone the database locally.


Great to see popular SaaS use cases commoditized and opened up like this. After product categories start to fossilize into a standard UX + features, the benefits of using an expensive vendor start to decrease.


Exactly. DocSend is a great product, but once my free trial ended I didn't want to pay $20/mo just to send documents. Maybe I'm cheap ¯\_(ツ)_/¯


Will you ever consider using EBS volumes or another mechanism which supports random writes?


Why use Kubernetes if you’re in control of the cloud environment? What does it bring to the table? Why not firecracker?


In a lot of ways, basing this on k8s gives us flexibility and independence, and there's much less friction in providing compute with high locality to application and user code.

Staying with k8s also lets us take advantage of existing operational tooling, experience, and work, so we can focus our development time on the important parts of this problem: runtime scaling, scheduling, and virtual machine management, rather than cloud provider APIs and management.

In short, k8s gives us options that we like for the future, it's shortening the development cycle, and only getting in our way a below-average amount. At the same time--for the most part--we're building this with reasonable abstractions that would let us reuse our existing work if k8s becomes more trouble than it's worth.


Firecracker doesn't support live migration. There is a newer project called Cloud Hypervisor that showed a lot of promise, but we struggled to make it work and reverted to QEMU.

As for k8s, it's an ongoing internal debate whether the complexity is worth the benefit. It helps us provision nodes, but we have to fight it quite a bit too. It's unclear whether we'll keep it long term.


I love DuckDB and am cheering for MotherDuck, but I think bragging about how fast you can query small data is really no different than bragging about big data. In reality, big data's success is not about data volume. It's about enabling people to effectively collaborate on data and share a single source of truth.

I don't know much about MotherDuck's plans, but I hope they're focused on making it as easy to collaborate on "small data" as Snowflake/etc. have made it to collaborate on "big data".


It'd be great to show the debugging experience in the video (in fact, I'd prefer seeing that over the breadth of features). E.g. what happens when there's a syntax error in my SQL query, or the Python code fails on an invalid input?

That tends to be the critical make-or-break feature when you're writing code in an app builder.


Agree, debugging is a critical user experience! In Patterns, you'll see the full stack trace and all logs when you execute Python or SQL.


The Rust vs. Go comparison has two key differences:

- The Rust example uses 8-bit unsigned ints, while the Go example uses 32-bit signed ints

- Rust's sort is stable by default whereas Go's is not.

If you tweak the Rust benchmark to use `i32` instead of `u8` and `sort_unstable` instead of `sort`, you should see ~3-4x faster performance.
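
For concreteness, here's a minimal sketch of that tweak (not the repo's actual benchmark code; it assumes a plain native harness, the `rand` crate's 0.8-style API, and a placeholder array size):

    use rand::Rng;

    fn main() {
        let mut rng = rand::thread_rng();

        // i32 to match the Go benchmark's 32-bit signed ints (the original Rust code used u8)
        let mut numbers: Vec<i32> = (0..100_000).map(|_| rng.gen::<i32>()).collect();

        // Unstable sort to match Go, instead of Rust's stable (and slower here) `sort`
        numbers.sort_unstable();

        println!("first = {}, last = {}", numbers[0], numbers[numbers.len() - 1]);
    }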


Made a PR with the fixes; Rust is now 3x faster than TinyGo, and the wasm is almost 3x smaller (wasm+js is half the size), as expected.

https://github.com/Ecostack/wasm-rust-go-asc/pull/1

My first foray into wasm, so I probably missed some optimizations like wasm-opt.


Also, I would assume different languages have different random() implementations, which could contribute to the run time. To make the tests equal, you shouldn't measure the time it takes to set up the array.
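
For example, on the Rust side that just means keeping the setup outside the timed region (a sketch with placeholder input, not the benchmark's real data):

    use std::time::Instant;

    fn main() {
        // Build the input before starting the clock so differences in random()
        // implementations (or any other setup cost) don't skew the comparison.
        let mut numbers: Vec<i32> = (0..100_000).rev().collect();

        let start = Instant::now();
        numbers.sort_unstable();
        println!("sort took {:?}", start.elapsed());
    }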


The Go version should also use `sort.Ints`. https://pkg.go.dev/sort#Ints

