Author here. tldraw is not an open source project. It used to be, but we switched to a commercial license for our v2 in 2023. The old v1 is still available under MIT, and I try to put as much as we can under the MIT license where it makes sense, but the core needs to be licensed in a way that allows us to sell it. We've always been majority source available and still accept community contributions, even though the default is to close external PRs. I care about OSS. I wish the economics of it made sense for us.
Dude, you just blew my mind. Your code is under a commercial license?
I'm not sure you realize this, but that nullifies your blog post's reason for existing.
Why would any reader care about your AI contribution policies if your code isn't even open source?
Be honest with me: would you contribute PRs to Microsoft Office or Adobe Photoshop? No, none of us would do that, because if we buy licenses to those products we are paying customers of proprietary software. It's not our job to fix bugs and add features for Microsoft or Adobe, we paid them to do that for us so that we can focus on our products’ business value, and in turn they keep the code proprietary for themselves.
To put it as bluntly as possible, you're a freeloader. You not only accept contributions, but you keep them as proprietary code, you micromanage contributors, and you stoop even lower by writing a blog post complaining about the quality of those contributions.
I feel bad for your customers that are making PRs to your codebase. Here they are doing free work for your business and they don't even get to keep community ownership of that code. Hell, at least when I contribute to VSCode, we get to fork the hell out of that codebase as it's MIT licensed. You just keep it!!
> I care about OSS.
Doubt! Your actions don't show it! Words are meaningless in comparison.
My advice to you, if you really want it? Cut the bullshit, don't even make the source available. Be honest about what you are: a commercial proprietary software company.
(author here) Code quality matters a lot—we make an SDK, we sell our code. I'm writing my own code with the assistance of AI tools. When I ask an agent to step in, I can recognize when the tools are producing poor results, and I can pause the work, correct it, or take the wheel to get things going in the right direction. Programming is still important, and quality and care are perhaps more important than ever, given how much of it we can do when tool-assisted.
(author here) To be fair, we were also getting plenty of poor PRs that implemented well-described issues. Or hey, maybe they were poor and maybe they weren't, but they were someone else's "claude please fix" and I don't think it's important for me to review them.
But you're right about the todos... except that the majority of times my little /issue command actually produces really great issues and digs up root causes very well. I still need to read and bless them though. Maybe we need a "potentially slop" label.
Hey, OP here. A shitty issue command is fine in a world where I have time to review it and expand it, just like I would have before; except now, there's an 80% chance that the AI actually got it right and I have a well-researched issue that can be immediately worked on. The only problem is if someone without any context comes by and feeds it into Cursor to make a PR.
Maybe it would have been better to keep the original no-body "fix truncate in sidebar text" style issues, though I quite like my little /issue command. I'm sure I'll have something else in a month.
Reviewing code is much less of a burden if I can trust the author to also be invested in the output and to have all the context they need to make it correct. That's true for my team / tldraw's core contributors but not for external contributors or drive-by accounts. This is nothing new, and it has up to now been worth the hassle for the benefits of contribution: new perspectives, other motivations, relationships with new programmers. What's changed is the scale of the problem and the risk that the repo gets overwhelmed by "claude fix this issue that I haven't even read" PRs.
Hey, Steve from tldraw here. We use AI tools to develop tldraw. The tools are not the problem; they're just changing the fundamentals (e.g. a well-formed PR is no longer a sign of thoughtful engagement, a large PR no longer shows more effort than a small PR, etc.) and accelerating other latent issues in contribution.
About the README etc.: we ship an SDK, and a lot of people use our source code as docs or a prototyping environment. I think a lot about agents as consumers of the codebase, and I want to help them navigate the monorepo quickly. That said, I'm not sure if the CONTEXT.md system I made for tldraw is actually that useful... new models are good at finding their way around, and I also worry that we don't update the files enough. I've found that, over time, bad directions are worse than no directions.
This is my experience as well. I work with AI agents a lot, they are very useful. What's not useful is some passer-by telling the AI "implement <my favorite feature>" and then sending that as a PR. I could have written a sentence to the LLM too if I wanted to, you aren't really giving me or the project any value by doing that.
Now that writing the code is the easy part, we're just going to transition to having very few contributors, who are needed for their architectural skills, product vision, reasoned thinking, etc, rather than pure code-writing.
Every inside contributor (besides the original author) started as an outside contributor. If the solution to the problem of LLMs is a blanket ban on outside contributors, I fear for the future of open source.
I disagree. When code took hours to write, it was very useful to have someone drive by and fix a bug for you. Now, all that does is save you five minutes of LLM crunching.
Want to email me steve@tldraw.com? We're looking at an issue with new accounts today on tldraw.com but it sounds different from what you're describing.
I think we’ll do extended trials for small teams if they’re pre-revenue / pre-funding, and I can imagine setting up some relationships like that with incubators etc. A few other posters have asked the same question and it’s a good one, thanks.
> Think about it: when/if this company grows to a larger size, if they can’t handle AI slop from contributors how can they handle AI slop from a large employee base?
god help us