Actually very similar; GitHub was also early in talking about Continuous AI. I think the biggest difference is that Continue is focused specifically on the use case of checks, so it gets right a lot of the small details that really matter, because it isn't built around generic non-AI CI workflows.
Some of these are:
- Having a local experience to run them with Claude Code, etc.
- Making it easy to accept/reject suggested changes
- A single folder dedicated to just checks so you don't have to think about triggers
- Built-in feedback loops so you can tune your checks over time
- Metrics so you can easily track which checks have a high suggestion-to-merge rate
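To make the dedicated-folder idea concrete, here's a hypothetical sketch of what a check could look like: a plain-English markdown file dropped into a checks folder. The folder path, filename, and format below are illustrative assumptions, not the actual product spec:

```markdown
<!-- .continue/checks/no-floating-promises.md (hypothetical path) -->
Every awaitable call in our TypeScript services must either be awaited or
have an explicit `.catch()` handler. Flag any commit that introduces a
floating promise, and suggest the minimal change to handle it.
```

Because each file is a standalone check, there are no triggers to think about: adding or removing a file adds or removes the check.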
I think this is a good summary! And the configurable part turns out to be the main bit
One of the fundamental differences between checks and code review bots is that you trade breadth for consistency. There are two things Continue should never, ever do:
1. find a surprise bug or offer an unsolicited opinion
2. fail to catch a commit that doesn't meet your specific standards
- we do! right now you can export some metrics as images, or share a link publicly to the broader dashboard. will be curious if others are interested in other formats https://imgur.com/a/7sgd81r
- a _lot_ of people still use the VS Code extension and so we're still putting energy toward keeping it polished (this becomes easier with checks : ))
- our checks product is powered by an open-source CLI (we think this is important), which we recommend for jetbrains users
- the general goal is the same: we start by building tools for ourselves, share them with people in a way that avoids creating walled gardens, and aim to amplify developers (https://amplified.dev)
Continue is building an open-source AI code assistant along with a hub of models, rules, and tools. Our VS Code and JetBrains extensions are primarily written in TypeScript, but we write small amounts of Kotlin (for JetBrains), Rust (for performant codebase indexing), and Go (for easy distribution of a CLI).
We're hiring for junior, mid-level, and senior full-stack software engineers.
If you're a user of .cursorrules or .windsurfrules, then there's a lot of great content at https://hub.continue.dev/explore/rules. They don't yet support assistant files directly so unfortunately you'd have to copy-paste, but we designed the standard to be client-agnostic and I think it will be a great way of sharing an ecosystem across clients going forward
We want Continue to always be free for individuals, and we layer on features for teams and enterprises to make money. There’s much more detail on our pricing page: https://hub.continue.dev/pricing
We have opt-out telemetry that is described clearly here (https://docs.continue.dev/telemetry), and is very easy to toggle off. And we’re actually much more interested in helping folks collect their own data. In the hub, there are “data” blocks that let you send development data to a destination of your choice so you can better understand your own use of AI for development and eventually fine-tune models: https://docs.continue.dev/customize/deep-dives/development-d...
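For a rough idea of what a data block looks like, here's a minimal sketch; the field names are an approximation and the docs link above is the authoritative reference:

```yaml
# Sketch of a "data" block in an assistant config (field names approximate)
data:
  - name: local-dev-data
    # Any destination you control: a local file path, an HTTP endpoint, etc.
    destination: file:///tmp/continue-dev-data
    schema: 0.2.0  # version of the development-data event schema
```

The point is that the destination is yours, so the development data never has to leave your control.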
I'm sorry you ran into problems repeatedly, and I couldn't agree more. As much as we've aimed to innovate on customization, it doesn't have to be an either/or. We recognized this, heard a lot from the community, and made stability the serious focus of our 1.0. Going forward we're going to keep treating Continue as the foundational tool we hope it can be. That has meant investment in testing, clearer contributing guidelines, and a general mindset shift: AI code assistants have become fundamental to developer workflows and have to be rock solid (when something is doing work on every keystroke, it had better not break!)
If you get the chance to try the 1.0, I'd love to hear whether you find it better, or where you think we can improve further. We think it's in a solid place, and it'll only keep improving from here.
Ooh +1 to both of these. We use shadcn as well :) and have been leveraging these docs: https://hub.continue.dev/vercel/shadcn-ui-docs, but there should totally be more in-depth rules for it and Firestore
I think it would be great to have one on the front page as well if there's something you find works really well. The nice thing about assistants is that a lot of the work is just browsing and clicking to add the blocks you want, and then when you really end up going deeper to customize the rules, this is just writing plain english!
I don't believe Cursor has added support for assistant files (https://docs.continue.dev/reference) yet, but think it would be great if they did! We've designed the format to be totally client agnostic and hopefully to be a simple, portable way to share assistants across any tool
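As a rough illustration of why the format is client-agnostic: an assistant file is just declarative YAML that composes blocks by reference, so any tool can resolve it. This is a minimal sketch, and the block slugs below are made up for illustration (see the reference link above for the real schema):

```yaml
# Sketch of an assistant file (block slugs are hypothetical)
name: example-assistant
version: 0.0.1
models:
  - uses: anthropic/claude-sonnet       # hypothetical model block
rules:
  - uses: example-org/typescript-rules  # hypothetical rules block
```

Nothing here is tied to one editor, which is what makes copy-pasting rules into other clients possible today and direct support feasible later.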
Are you using a lot of `gh-aw`?