Hacker News | coryvirok's comments

The hack for SvelteKit specifically is to first have Claude translate the existing code into a Next.js route with React components. Run it, debug, and tweak it. Then have Claude translate the Next.js and React components into SvelteKit/Svelte. Try to keep it in a single file for as long as possible and only split it out once it's working.

I've had very good results with Claude Code using this workflow.


Not sure why there is so much hate on this thread. I found the post well written, insightful, and pragmatic.

Having built systems that process billions of events and display results, trigger notifications, etc. in real time (not RTOS-level; I'm talking 1 or 2 seconds of latency), you absolutely need to separate reads and writes. And if you can trust DB replication to be fast and reliable, you can indeed skip distributed locks and stay on the right side of the CAP theorem.
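The read/write split described above can be sketched as a primary that accepts writes and a replica that serves reads after a replication lag. All names here (`Primary`, `Replica`, `drainLog`) are illustrative, not from any system mentioned in the thread:

```typescript
// Sketch of read/write separation: writes go to a primary store, reads
// are served from a replica that catches up via a change log.

type Row = { id: string; value: number };

class Primary {
  private rows = new Map<string, Row>();
  private log: Row[] = []; // change log consumed by replicas

  write(row: Row): void {
    this.rows.set(row.id, row);
    this.log.push(row);
  }

  drainLog(): Row[] {
    const pending = this.log.slice();
    this.log.length = 0;
    return pending;
  }
}

class Replica {
  private rows = new Map<string, Row>();

  apply(changes: Row[]): void {
    for (const row of changes) this.rows.set(row.id, row);
  }

  read(id: string): Row | undefined {
    return this.rows.get(id);
  }
}

const primary = new Primary();
const replica = new Replica();

primary.write({ id: "a", value: 1 });
console.log(replica.read("a")); // undefined: replica hasn't caught up yet
replica.apply(primary.drainLog()); // "replication" catches up
console.log(replica.read("a")); // { id: "a", value: 1 }
```

The 1-2 seconds of latency mentioned above is exactly the window between `write` and `apply`: reads in that window see stale data, which is the trade you accept when you skip distributed locks.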

Event sourcing is how every write-ahead log works, which powers basically every DB.
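The WAL analogy boils down to this: current state is just a fold over an append-only log, and recovery is replay. A minimal sketch (the deposit/withdraw domain is invented for illustration):

```typescript
// Minimal event-sourcing sketch: state is derived by replaying an
// append-only log, the same idea a database WAL uses for crash recovery.

type Event =
  | { kind: "deposit"; amount: number }
  | { kind: "withdraw"; amount: number };

const log: Event[] = [];

function append(e: Event): void {
  log.push(e); // a real WAL fsyncs this write before acknowledging it
}

function replay(events: Event[]): number {
  return events.reduce(
    (balance, e) =>
      e.kind === "deposit" ? balance + e.amount : balance - e.amount,
    0,
  );
}

append({ kind: "deposit", amount: 100 });
append({ kind: "withdraw", amount: 30 });
console.log(replay(log)); // 70 -- recovery is just replaying the log
```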

Is the concern on this thread that they prematurely optimized? I thought they walked through their decision-making process pretty clearly.


I suspect there is a bit of a knee-jerk reaction because this pattern is so often misapplied. I actually quite like the example in the article, although I'm basically allergic to CQRS in general.

I think your point about write-ahead logging etc is a good one. If you need a decent transactional system, you're probably using a system with some kind of WAL. If you're event sourcing and putting events into something which already implements a WAL, you need to give your head a wobble - why is the same thing being implemented twice? There can be great reasons, but I've seen (a few times) people using a perfectly fine transactional DB of some kind to implement an event store, effectively throwing away all the guarantees of the system underneath.


For sure. Event logs in transactional DBs are weird. I was surprised that they weren't using something like Kafka for this.


> Not sure why there is so much hate on this thread.

1) "Kafka is resume-driven-development" is a meme.

2) Devs are in denial about being in a distributed system, and think that single-threaded thinking (in proximity to a DB that calls itself ACID) leads to correct results in a distributed setting.
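The classic failure of single-threaded thinking is the lost update: two clients each do a read-modify-write against shared state, and one increment silently vanishes. A deterministic toy simulation of the interleaving:

```typescript
// Two "app servers" increment a shared counter without coordination.
// Each reads a snapshot, then writes snapshot + 1; the interleaving
// below loses one of the updates.

let counter = 0;

const snapshotA = counter; // client A reads 0
const snapshotB = counter; // client B also reads 0, before A writes
counter = snapshotA + 1;   // A writes 1
counter = snapshotB + 1;   // B writes 1, clobbering A's increment

console.log(counter); // 1, not 2: a lost update
```

An ACID database protects each individual transaction, but once the read and the write happen in separate round trips from separate processes, you're back in distributed-systems territory whether you admit it or not.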


This is really good! It would be really cool to somehow get human designs in the mix to see how the models compare. I bet there are curated design datasets with descriptions that you could pass to each of the models and then run voting as a "bonus" question (comparing the human and AI generated versions) after the normal genAI voting round.


wow this is a super interesting idea, and the team loves it — we'll fast-follow and post an update here when we add it, thanks for the suggestion!


This would be extra interesting for unique designs: something more experimental, new. As for now, even when you ask AI to break all the rules, it still outputs standard BS.


yes! sorry to ping this post late, but we're experimenting with something now - excited to update :)


Shameless plug for https://rollbar.com

Good luck out there!


Apparently an unpopular opinion, but strong types are actually useful above and beyond editor linting errors:

- JIT optimizations
- Fewer error-checking code paths, leading to smaller footprints
- Smaller footprints, leading to a smaller vulnerability surface area
- Less useful: refactorability

Don't get me wrong, I love the flexibility of JavaScript. But you shouldn't rely on it to handle your poorly written code.
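On the "fewer error-checking code paths" point: TypeScript annotations are erased at compile time, so any JIT benefit is indirect (engines optimize call sites whose argument shapes stay consistent), but types do let you delete defensive runtime validation. A toy sketch, with invented function names:

```typescript
// Without a typed signature, every call path pays for validation.
function areaUntyped(r: unknown): number {
  if (typeof r !== "number" || r < 0) throw new TypeError("bad radius");
  return Math.PI * r * r;
}

// The compiler guarantees r is a number, so no runtime branch is needed.
// The call site is also monomorphic, which JS engines optimize well.
function areaTyped(r: number): number {
  return Math.PI * r * r;
}

console.log(areaTyped(2) === areaUntyped(2)); // true, same result, fewer branches
```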


How do you get JIT optimizations from TypeScript types? Don't the types get stripped before you run it?


This has led me to wonder if this is the last generation of "senior" devs. The thinking goes: if it takes a couple of years on average to educate and train a junior dev, and LLMs can increasingly replace junior devs, then there is no need for a company to hire junior devs, starving the ecosystem of talent that would otherwise have gone on to become senior.

In a world where average work is the first to be displaced (due to training data availability), the last to be replaced are the ones furthest from the distribution mean...



Why did you decide to rewrite this in react vs svelte?

As someone who loves Svelte and who's been writing a fairly large Svelte app but has been jealous of the React ecosystem, I'd love to hear your rationale.


It pained me to do this. Everyone I talked to used React instead of Svelte. At some point, we realized we were spending half the time supporting Svelte just for ourselves for <10% of the return.

I still love Svelte and will continue using it for side projects but right now I think the tech needs to get out of the way of the better user experience.


Ya, that seems about right for my project as well. Although mine won't be OSS so I'm more concerned with finding good ppl with Svelte skills vs React skills. I may need to bite the bullet once I'm closer to hiring. Thanks.


No problem. This is very poorly documented, but this entire repo is built in Svelte. I'm porting functionality over from it, so it's a much more mature editor. If there's a specific implementation you want to learn about, let me know :)

I'm always happy to talk about Svelte! Though, I'm starting to enjoy the flexibility of jsx now that I'm back to using React.

https://github.com/onlook-dev/monorepo


Hey! Cory, (Co-Founder/CTO) from Rollbar here.

Just wanted to say I think this is awesome and I love what you're doing here! :D

And to the author, I'd love to chat - drop me a line at cory@rollbar.com


Also, check out https://rollbar.com for free error monitoring.

(Ps. I'm one of the founders :)


Cool! I'll add it to my growing list :) Thanks!

