That's a great discussion of autorouting. Then he ends with: "key piece to enable the “vibe-building” of electronics." Ouch
Routing itself is easy. It's when the router has to tear up stuff it already routed to fit new stuff in that things get complicated and the combinatorics start to get you.
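To make the combinatorics concrete, here's a toy sketch of rip-up-and-reroute on a grid (hypothetical code, nothing like a production router such as FreeRouting):

    from collections import deque

    def bfs_route(grid, src, dst):
        # Lee-style maze route on a grid of 0 (free) / 1 (blocked); returns a path or None
        h, w = len(grid), len(grid[0])
        prev = {src: None}
        q = deque([src])
        while q:
            x, y = q.popleft()
            if (x, y) == dst:
                path, cur = [], dst
                while cur is not None:
                    path.append(cur)
                    cur = prev[cur]
                return path[::-1]
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < h and 0 <= ny < w and grid[nx][ny] == 0 and (nx, ny) not in prev:
                    prev[(nx, ny)] = (x, y)
                    q.append((nx, ny))
        return None  # blocked by obstacles or by nets routed earlier

    def route_all(grid, nets, max_rips=50):
        # Greedy routing with naive rip-up: if a net cannot route, tear out a
        # previously routed net and put both back on the queue. Every rip-up
        # invalidates earlier decisions, and that is where the explosion lives.
        routed, queue, rips = {}, list(nets.items()), 0
        while queue:
            name, (src, dst) = queue.pop(0)
            path = bfs_route(grid, src, dst)
            if path is not None:
                routed[name] = path
                for x, y in path[1:-1]:
                    grid[x][y] = 1      # routed copper becomes an obstacle
            elif routed and rips < max_rips:
                victim, vpath = routed.popitem()
                for x, y in vpath[1:-1]:
                    grid[x][y] = 0      # free the victim's copper
                queue += [(name, (src, dst)), (victim, nets[victim])]
                rips += 1
            else:
                return None             # gave up
        return routed

Even on a small grid, a handful of mutually blocking nets is enough to make this thrash.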
I miss the autorouter KiCAD used to have. It was taken out for iffy IP reasons (the author had worked for an autorouting company). The reaction to users who wanted it back was along the lines of "Real Men Don't Use Autorouters".[1]
Haha I feel like the right reaction to "vibe-*" is to cringe. I cringe a little bit every time I see someone promoting a vibe-coded app at the moment, but I think back to how I got started in coding (I was constantly annoying people on old ActionScript forums to fix my code) and I see so much potential in people being able to get started quickly in any domain. I hope that our autorouter (and others that follow!) will similarly allow people to ship their first electronics without needing tons of guidance or formal education.
That said a good autorouter should also be useful to professionals! So hopefully we help with that as well!
I wish these folks well and hope that their autorouter gets folded into KiCad.
However, I'm one of the cranky old people who don't really want to see KiCad expend any energy on autorouters: PCB autorouters are a pain in the ass that never work.
We can look at VLSI autorouters to determine why that is. VLSI autorouters also were a pain in the ass that never worked. But what happened was that VLSI suddenly got lots of layers and you could dedicate a layer to routing vertically, a layer to routing horizontally, a layer to routing power and still have a couple of layers for global vertical interconnect, global horizontal interconnect, and global power.
The fundamental problem with PCB autorouting is that PCBs have WAY more obstructions than VLSI chips do. First, components themselves are obstructions and choke points. Second, PCB vias almost always obstruct all the layers of the board while VLSI vias only obstruct the two layers being connected. Third, PCB vias tend to be bigger than your interconnect metal width. Fourth, the number of PCB layers in use is way smaller than the number of layers in VLSI--the most common numbers of layers are 4 layers (most projects--of which only 2 are really used for general routing), 2 layers (because of cost engineering--good luck autorouting these) and 6 (a tiny minority).
It all adds up to PCB autorouting being a significantly more complicated job than VLSI autorouting.
I don't think that's true. Perhaps by number of PCBs made 2 and 4 layers dominate: all those IoT doohickeys and alarm clock displays. And even single layer phenolic boards. And for most hobbyist work with little MCUs, 4 layers is a sweet spot. But they're usually either very simple devices where routing is not the problem, or they have very tight application constraints where the placement has to be squeezed and squeezed.
But much of the effort in designing PCBs in industry is on 6+ layers. You can probably smash out a simple smart lightswitch PCB in a day. Boards with BGA SoCs, DDR, PCIe and FPGAs can take whole teams months or more and have incredibly complicated constraints, many of which are implicit (the very simplest: put cap near pin, test points inline, make diff pair via symmetrical, keep this SMPS far from sensitive things, and make the inductor loop as tiny as you can). There are a million ways to make a PCB pass DRC and still be a completely non-functional device. In particular, routing is secondary in terms of effort and importance to placement.
If you sample what a random PCB engineer is working on, it's quite likely to be lots of layers, or an extremely tight layout, and probably both. Or something weird and application-dependent like high voltage. And they're probably fiddling with placement at the random sample time you choose.
Toy examples of sparse layouts like mechanical keyboards and DIP ICs are very unrepresentative of where the real effort, and money, goes.
KiCAD must be moving up in the world if people are using it for 6-layer boards. Or Altium, at US$5,500/year, is now too expensive even for pros.
I'd thought of KiCAD as a hobbyist tool. It didn't have the intensive library support of a pro tool. Footprints and 3D models were user submissions and not well curated or tested.
Footprints might need some manual tweaking.
With a pro tool, you're paying for a small army of people doing data entry on part info. Has KiCAD improved in that area?
> KiCAD must be moving up in the world if people are using it for 6-layer boards.
With every release of KiCad, the difference between KiCad and Altium gets smaller and smaller.
> With a pro tool, you're paying for a small army of people doing data entry on part info.
That has never been my experience using any PCB tool (even the truly big boys like Expedition or Allegro). Any libraries managed by an entity external to the people actually designing the boards have always been a disaster. If you haven't personally put something on a board, it is, practically by definition, broken.
Anyone using a "pro" tool (Expedition or Allegro) has their own army of people managing their library.
I could rant yet again about how Altium screwed themselves by encrypting libraries so that you had to subscribe to them, but I'm tired of that rant.
In Altium's stead, KiCad has been eating up the bottom end more and more. There are still some missing features (copper on inner footprint layers, flex board stackups), but they get fewer and fewer with each release. And Autodesk's moves with Eagle have hastened even more people onto KiCad.
People 100% use KiCad for 6+ layers. My most recent design has 6 and KiCad didn't break a sweat. There's a 12-layer CERN design that ships with KiCad as a demo, even.
No tool has libraries that never need tweaking for a design. 99% of KiCad library parts will never need touching in a usual "simple" design, but no one library part can cover all eventualities. Any tool that promises a library that doesn't ever need to be touched is lying or naive. You should also always check footprints wherever they come from, even if you paid for them.
KiCad has a new library team, and the parts in the library are dramatically better than they were, say, 5 years ago.
Author here. Lots of great points being made. I want to throw in a crazy prediction.
I think routing is basically an image transformer problem without a good dataset right now. If the eye of the giant AI companies turned to autorouting and large synthetic but physically simulated circuit datasets were created, we would basically be done and autorouting would be a solved problem.
This means that all the work I’m doing now on heuristic algorithms, as well as all the hard work done by humans, will probably not be needed in the future. I just don’t see autorouting as being more difficult (in terms of solution space size) than the art being produced by transformer models right now.
I’m saying this because you’re right, these heuristic algorithms can only get us so far: the problem is really difficult. But our human intuition, the magic black box operation we do, doesn’t seem to be too far beyond the emerging transformer image models.
The major difference is that in PCB design, every single track has to abide by the rules; no exceptions are allowed if you want your board to work.
AI-generated art, by contrast, is full of small defects which people just ignore: who cares about unnatural shadows or unrealistically large fingers?
It is possible to iteratively combine AI with a DRC checker and loop until all is good, but it's not obvious to me that this would be performant enough, or whether the system would get stuck in some sort of local minimum forever once the circuit is complex enough.
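Something like this loop, sketched in made-up pseudocode (propose_routes, run_drc and repair are invented stand-ins, not any real API):

    def route_until_clean(board, model, max_iters=20):
        layout = model.propose_routes(board)      # generative router's first attempt
        for _ in range(max_iters):
            violations = run_drc(layout)          # clearance, width, unconnected nets, ...
            if not violations:
                return layout                     # every track obeys the rules
            layout = model.repair(layout, violations)  # feed the violations back in
        return None  # never converged: the local-minimum worry above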
The same way Claude never outputs code that has a syntax error, the image transformers will output DRC-compliant “images”!
I think spatial partitioning can help solve issues with minor DRC violations as well: it should be easier to correct an image than to generate one from scratch. But I'm not even sure it'll be necessary because of how coherent image models already are.
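Roughly this idea, with every name invented: keep the board image, carve out only the tiles that contain violations, and have the model redraw just those windows.

    def repair_by_tiles(layout, violations, model, tile=64):
        dirty = {(v.x // tile, v.y // tile) for v in violations}
        for tx, ty in dirty:
            window = layout.crop(tx * tile, ty * tile, tile, tile)
            # in-painting a small window should be cheaper than a full re-route
            layout.paste(model.inpaint(window), tx * tile, ty * tile)
        return layout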
Claude doesn't usually produce code that actually works, though. Passing DRC is one thing (no syntax errors). Actually working is another (compiling and running with the desired effect as a complete application).
And you don't even get to use unit tests to check correctness.
You're suggesting the robots can learn the routing algorithms and rules just by looking at a bunch of pictures?
Sure, maybe, given a super-massive amount of data.
I see it as the difference between "I want a photo-realistic portrait of a beagle" and "I want a photo-realistic portrait of my neighbor's beagle, Bob". For the first, there are some general rules as to what makes something a 'beagle', so it's not too hard, while the second has specific constraints which can't be satisfied without a bunch of pictures of Bob.
To address the specific issue, an AI would have to learn the laws of physics (aka, "Bobness") from a bunch of pictures of, essentially, beagles in order to undertake the task at hand.
I think maybe the best way to get this data set is to subsidize a few dozen electronics recycling centers for every unique microCT scan they send you. Lease them the tomography equipment. They increase their bottom line, you get a huge dataset of good-to-excellent quality commercial PCB designs.
Very fun idea, I had not considered training on existing work (IP is so sensitive I just couldn't think of a way to get enough)
My approach is slightly different for building the dataset. I think we should bootstrap an absolutely massive synthetic dataset full of heuristically autorouted PCBs to allow the AI to learn the visual token system and basic DRC compliance. We then use RL to reward improvements on the existing designs. Over time the datasets will get better, similar to how new synthetic datasets are produced whenever a new LLM is released, making subsequent LLMs easier to train.
I think people are underestimating the number of PCBs that are needed to train a system like this. My guess is it is well over 10m PCBs with perfect fidelity. It will make sense to have a large synthetic data strategy.
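The bootstrapping loop I have in mind, very roughly (every function here is invented):

    def build_seed_dataset(n_boards, heuristic_router, score):
        # stage 1: a massive synthetic corpus routed by classical heuristics
        data = []
        for _ in range(n_boards):
            board = random_placed_netlist()    # physically simulated placement + netlist
            layout = heuristic_router(board)   # imperfect but DRC-clean seed routing
            data.append((board, layout, score(layout)))
        return data

    # stage 2: train an image model on the seeds, then use RL with a reward like
    # (passes DRC, fewer vias, shorter total track length) so later models can
    # out-route the heuristic teacher on the same boards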
Before you splurge on hardware to extract data it would be much cheaper and faster to just buy it in Shenzhen. All the Apple stuff has been reverse engineered, this is how apps like ZXW have scans of all pcb layers. Random google search https://www.diyfixtool.com/blogs/news/iphone-circuit-diagram...
[1] https://forum.kicad.info/t/autorouting-and-autoplacement/185...