I'm always curious about people entering the space though. It is a small market, IMO. The tools are fractured, the existing players are lumbering behemoths, and the users are cranky zealots (you will have to pry KiCad out of my cold dead hands). I noted the part about the autorouter being written in JavaScript, which I have no real opinion on, but I'm curious about plans to plug into existing ecosystems (CAD vendors, OS tools) or try and draw people into another new ecosystem.
JavaScript is fairly portable now with many runtimes (some even quite small, such as QuickJS or Porffor), so I anticipate people will be able to run locally and build their own ginormous cache :)
I think everyone should be concerned about lockin and fragmenting ecosystems in EDA, but tscircuit and our autorouter are MIT-permissive technology (very rare in EDA) so we play nice (interoperable) with everyone.
I wonder why there's no standard API for autorouting, so that an autorouter isn't tied to one particular EDA package. I'd love to see the autorouter as a standardized HTTP endpoint so that I can run one locally or even buy Autorouter-as-a-Service. E.g. I'd happily pay for TopoR¹ as a service for some projects while others are fine with classic routing.
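To make that concrete, here is a rough sketch of what such an endpoint could look like - the URL, field names, and response shape are all invented for illustration, not an existing service:

    // Purely hypothetical "autorouting as a service" request.
    interface RouteRequest {
      board: { widthMm: number; heightMm: number; layers: number };
      obstacles: { x: number; y: number; w: number; h: number; layer?: number }[];
      nets: { name: string; pads: { x: number; y: number; layer: number }[] }[];
    }

    async function requestRouting(req: RouteRequest) {
      const res = await fetch("https://autorouter.example.com/v1/route", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(req),
      });
      if (!res.ok) throw new Error(`routing failed: ${res.status}`);
      return res.json(); // e.g. { traces: [...], vias: [...], unrouted: [...] }
    }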
It's nice to see anyone at all working on PCB autorouters, which have been pretty stagnant since Cadence bought SPECCTRA in the nineties. The guys who wrote SPECCTRA went to the VLSI world iirc and never came back - I guess that's where all the fame and fortune is. Probably was a patent minefield for a while there too; might still be. Autoplacement was an utterly intractable problem back then, still seems to be, and it's probably ripe for a generative AI approach - a good generative AI first pass at parts placement could be a net time saver. The biggest problem would be convincing the cranks that you can be good even if you can't be perfect.
I'm sort of puzzled by the kids out there trying to do schematics-as-code. On the one hand, I would love to see that work as a backend format - especially where folks like the jitx guys seem to be making a lot of progress in encoding appnote and datasheet-level design rules into parts models. Reading every datasheet in the detail needed for commercial designs is more work than people expect, and so is getting junior engineers on board with the process. So any automation there is beneficial. The problem is that the approaches all seem rooted in this idea of schematics as data entry for layout, a kind of source code, rather than as design documentation with a carefully evolved visual language that needs to be accessible to people who won't have an EDA suite installed on their computer. I guess people who learned from puzzling out schematics in the Adafruit/Sparkfun/Shenzhen style with explicit wiring minimized might not understand the value of a good schematic.
The other thing is leaning too far into thinking-by-analogy and trying to make PCB-level design like VLSI design. I don't think it's entirely impossible - if we had better tools for DRC and validation, component-level design would look more like VLSI design. But the design space is so different, with such loose coupling between design, EDA/CAM/simulation, validation, fabricators, assemblers, component vendors, regulators/certifiers, and so on that just getting one corner of it really right would be a major accomplishment.
In general, impedance-controlled UHF design with domain-specific simulation tools is the modern workflow. Thus, one manually routes the critical tracks first, then island pours, and finally power interconnects.
KiCad is slightly better than nothing for layout, and trying to make it into yet another half-baked simulation tool seemed silly. =3
If I need to sim, I’ll just use LTspice or TI TINA - i.e. the simulation ecosystem that includes the part model I need, with no wrangling to get that spice model to play nicely in some other spice toolchain.
https://web.archive.org/web/20230214034946/http://www.spectr...
For learning, the current LTspice models, along with device data imported into QUCS from VNA-profiled measurements, are reasonable tools as well.
I use KiCad all the time for personal projects, and it has recently addressed the major issues people found annoying:
1. unstable library version dependency management
2. complex design collaboration over traditional versioning tools like git (still an upstream feature last I checked, but should be available soon.)
3. freezing the specific KiCad and library path-structure versions across a project, as assuming every design has a single-workstation workflow is a contradictory goal
KiCad is better than most current EDA software, but that is mostly a sad reflection on the traditional tools available. =3
https://forum.kicad.info/t/tutorial-how-to-enable-git-in-kic...
Or the plugin and content manager under the tools menu for the git-gui module...
YMMV =3
These last five years have been absolutely incredible to watch in Kicad’s development. Last two releases have added the two major things that Kicad didn’t have that professional CAD tools did:
* database support
* outjob features
Beyond that, it’s really just a matter of adoption and end user preference of how to put these features to work. (Databases tend to come with more internal corporate bureaucracy around organizing data than anything else.)
But, for the topic at hand: don’t you think Kicad is already moving sort of in the direction you talk about, as far as workflow to speed up layout is concerned?
Best example I can think of is the “trace autocomplete” feature in, I think, 7.0. Hit the hotkey (I think it’s F in pcbnew) and the trace will be laid for the track currently being placed. Combine this with “route from other end of track” (hotkey E), and this is a blazing productivity boost, especially when you’re working across two different ball-out grids.
Version 9 enabled buses/multiple tracks to be draggable, meaning this whole system could go even faster still.
Many times when starting a design my placements aren't set and that has a huge impact on routing.
Honestly, I’d trust an autorouter to complete a good chunk of my designs if I can get to a placement I’m satisfied with, and I can give the autorouter some constraints on where to route. For example: I did a board last year using an NXP iMX8MP with an eMMC. The peripheral ballout on the processor matches the eMMC ballout really nicely - it was just a matter of lining the chips up and drawing the lines. An autorouter could have done in seconds what took me ~10 minutes if it had known to keep everything in the data bus on the top layer.
I think that’s a success criterion autorouter projects suffer from: they don’t consider their autorouter “done” unless their router can do _everything_ on a board. As a working EE, I don’t actually want that. I want an autorouter that can work with me to do a satisfactory job on a little chunk of the design at a time, give me some time to review it for correctness, then move on to the next chunk.
That would be killer. Especially if you could give constraints beyond layers, e.g. “keep all nets with names D0-7 on layers one and three, and match their length to within 5mm of one another. Use D0 as the net length reference.” If you can do that, then boom: you’ve solved DRAM length tuning. Huge range of design complexity now within reach of the unwashed masses.
OP - if I can find some time to show you what I mean, I’d be more than happy to give you a demo of what I’m talking about.
I do think humans need to be in the loop on autorouters, but I also think autorouters should be very fast and decent. In web design we progressively add constraints (via CSS) to tweak the output of the visual design and see the result instantly; I believe this same workflow is possible for PCB design (and it will favor those who are good at specifying constraints!)
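As a sketch of what that could look like, here is a purely hypothetical, CSS-flavored constraint list over nets - none of these selectors or fields are a real tscircuit API:

    // Sketch only: an invented constraint format in the spirit of CSS selectors.
    const constraints = [
      // keep the data bus on layers 1 and 3
      { select: "net[name^='D']", layers: ["top", "in2"] },
      // length-match the bus to D0, within 5mm
      { select: "net[name^='D']", matchLengthTo: "D0", toleranceMm: 5 },
      // don't let power traces run under the IMU
      { select: "net[name='VBAT']", keepOutUnder: "U3" },
    ];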
I don’t think the goals of “human in the loop” and “fast and decent” are mutually exclusive, or really even at odds with each other.
I just think that most autorouters I’ve tried seem to have been written by CS people who thought PCB layout was an interesting problem to solve - without a particularly deep understanding of what working EEs care about in terms of how you route the board.
Case in point: plenty of autorouters seem to work on the paradigm of “all tracks routed? Great. I’m done.” Never mind that everything is two layers of spaghetti: no ground plane, no power planes, no regard for SI or EMC compliance. In other words: pure hell to test and certify.
Not trying to be a crotchety EE by saying this - I can give a good example of what I mean in real time. I also feel like I’m a bit of an exception in the EE community in that I think this is a solvable problem. I just don’t think many CS people have really bothered to consult any real working EEs about their workflows and the downstream portion of test and certification.
The point of the Monte Carlo method is that you can trade off accuracy for speed. The longer you run your algorithm, the more accurate you get.
But what's even more interesting is that we can often use the flip side: you can get a shitty-accuracy result real fast.
Instead of exploring all paths, you explore only one path picked at random.
And that's where it shines: when you put it in the innermost, most deeply nested loop of your algorithm.
For example, when you want to train a neural network that autoroutes. Typically you would have the outer loop, which updates the neural network parameters, and the inner loop, which computes a path through the graph.
When you use Monte Carlo, this inner loop, which controls accuracy, can be reduced to a single iteration if everything is unbiased. You will have a slower outer loop due to the higher variance, but the machine learning should """theoretically""" still work.
This is what allows you to build policies which intuitively always pick the right decision. Like in chess or Go, you have some Monte Carlo tree search variant like AlphaGo Zero (or alpha-chess-zero or alpha-router-zero), where even without the search part, the giant cache (encoded by the neural network parameters), once trained, can compute your best-guess path in one pass through your neural network, aka constant time (with a constant that can easily trade memory for speed by increasing the number of parameters or training for longer).
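As a rough sketch of that inner loop (invented types, not from any real autorouter): sample a single random path and use its cost as a noisy estimate; averaging more samples buys back accuracy.

    type Graph = Map<string, { to: string; cost: number }[]>;

    // Random walk until we hit the goal, run out of steps, or dead-end.
    function sampleRandomPathCost(g: Graph, start: string, goal: string, maxSteps = 1000): number {
      let node = start;
      let total = 0;
      for (let i = 0; i < maxSteps && node !== goal; i++) {
        const edges = g.get(node) ?? [];
        if (edges.length === 0) return Infinity;            // dead end
        const e = edges[Math.floor(Math.random() * edges.length)];
        total += e.cost;
        node = e.to;
      }
      return node === goal ? total : Infinity;
    }

    function estimateCost(g: Graph, start: string, goal: string, samples = 1): number {
      let sum = 0;
      for (let i = 0; i < samples; i++) sum += sampleRandomPathCost(g, start, goal);
      return sum / samples;  // samples=1 is the fast, high-variance case
    }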
Yeah, I had the exact same reaction when I read the article.
MC is the "keeping it real" algorithm: slow, but almost always dead simple to implement, and one that can be trusted with super high certainty to double-check that you've not completely wandered off into the weeds.
Source: I once wrote a simulated annealer for VLSI wiring. It was cool.
1. Plop the labels down in random* directions.
2. Calculate the total overlap.
3. Randomly* move 10% of the labels.
4. Calculate the total overlap again. If the overlap improved, keep your changes; if things got worse, backtrack.
5. Sometimes you randomly keep changes that make things worse; this is the annealing part, and it's how you escape local minima.
* The randomness does not have to be 100%: you can try smartly avoiding nearest neighbors 2/3 of the time and then do a random move 1/3 of the time. If you always try to be smart you will tend to get trapped in a local minimum. (A rough sketch of the loop is below.)
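A minimal sketch of that annealing loop, with the scoring and move functions left to the caller (all names here are made up, e.g. totalOverlap and "randomly nudge ~10% of labels"):

    function anneal<S>(
      state: S,
      totalOverlap: (s: S) => number,
      randomMove: (s: S) => S,
      iterations = 10000,
    ): S {
      let current = totalOverlap(state);
      let temperature = 1.0;
      for (let i = 0; i < iterations; i++) {
        const next = randomMove(state);
        const score = totalOverlap(next);
        // keep improvements; occasionally keep a regression to escape local minima
        if (score < current || Math.random() < temperature) {
          state = next;
          current = score;
        }
        temperature *= 0.999; // cool down: accept fewer bad moves over time
      }
      return state;
    }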
Routing itself is easy. It's when the router has to tear up stuff it already routed to fit new stuff in that things get complicated and the combinatorics start to get you.
I miss the autorouter KiCAD used to have. It was taken out for iffy IP reasons (the author had worked for an autorouting company). Reactions to users who wanted it back were along the lines of "Real Men Don't Use Autorouters".[1]
[1] https://forum.kicad.info/t/autorouting-and-autoplacement/185...
That said a good autorouter should also be useful to professionals! So hopefully we help with that as well!
However, speaking as one of the cranky old people who don't really want to see KiCad expend any energy on autorouters: PCB autorouters are a pain in the ass that never work.
We can look at VLSI autorouters to determine why that is. VLSI autorouters also were a pain in the ass that never worked. But what happened was that VLSI suddenly got lots of layers and you could dedicate a layer to routing vertically, a layer to routing horizontally, a layer to routing power and still have a couple of layers for global vertical interconnect, global horizontal interconnect, and global power.
The fundamental problem with PCB autorouting is that PCBs have WAY more obstructions than VLSI chips do. First, components themselves are obstructions and choke points. Second, PCB vias almost always obstruct all the layers of the board while VLSI vias only obstruct the two layers being connected. Third, PCB vias tend to be bigger than your interconnect metal width. Fourth, the number of PCB layers in use is way smaller than the number of layers in VLSI--the most common numbers of layers are 4 layers (most projects--of which only 2 are really used for general routing), 2 layers (because of cost engineering--good luck autorouting these) and 6 (a tiny minority).
It all adds up to PCB autorouting being a significantly more complicated job than VLSI autorouting.
I don't think that's true. Perhaps by number of PCBs made 2 and 4 layers dominate: all those IoT doohickeys and alarm clock displays. And even single layer phenolic boards. And for most hobbyist work with little MCUs, 4 layers is a sweet spot. But they're usually either very simple devices where routing is not the problem, or they have very tight application constraints where the placement has to be squeezed and squeezed.
But much of the effort in designing PCBs in industry is on 6+ layers. You can probably smash out a simple smart lightswitch PCB in a day. Boards with BGA SoCs, DDR, PCIe and FPGAs can take whole teams months or more and have incredibly complicated constraints, many of which are implicit (the very simplest: put cap near pin, test points inline, make diff pair via symmetrical, keep this SMPS far from sensitive things, and make the inductor loop as tiny as you can). There are a million ways to make a PCB pass DRC and still be a completely non-functional device. In particular, routing is secondary in terms of effort and importance to placement.
If you sample what a random PCB engineer is working on, it's quite likely to be lots of layers, or an extremely tight layout, and probably both. Or something weird and application-dependent like high voltage. And they're probably fiddling with placement at the random sample time you choose.
Toy examples of sparse layouts like mechanical keyboards and DIP ICs are very unrepresentative of where the real effort, and money, goes.
I'd thought of KiCAD as a hobbyist tool. It didn't have the intensive library support of a pro tool. Footprints and 3D models were user submissions and not well curated or tested. Footprints might need some manual tweaking. With a pro tool, you're paying for a small army of people doing data entry on part info. Has KiCAD improved in that area?
With every release of KiCad, the difference between KiCad and Altium gets smaller and smaller.
> With a pro tool, you're paying for a small army of people doing data entry on part info.
That has never been my experience using any PCB tool (even the truly big boys like Expedition or Allegro). Any libraries managed by an entity external to the people designing the boards have always been a disaster. If you haven't personally put something on a board, it is, practically by definition, broken.
Anyone using a "pro" tool (Expedition or Allegro) has their own army of people managing their library.
I could rant yet again about how Altium screwed themselves by encrypting libraries so that you had to subscribe to them, but I'm tired of that rant.
In Altium's stead, KiCad has been eating up the bottom end more and more. There are still some missing features (copper on inner footprint layers, flex board stackups), but they get fewer and fewer with each release. And Autodesk's moves with Eagle have hastened even more people onto KiCad.
No tool has libraries that never need tweaking for a design. 99% of KiCad library parts will never need touching in a usual "simple" design, but no one library part can cover all eventualities. Any tool that promises a library that doesn't ever need to be touched is lying or naive. You should also always check footprints wherever they come from, even if you paid for them.
KiCad has a new library team, and the parts in the library are revolutionarily better than they were, say, 5 years ago.
I think routing is basically an image transformer problem without a good dataset right now. If the eye of the giant AI companies turned to autorouting and large synthetic but physically simulated circuit datasets were created, we would basically be done and autorouting would be a solved problem.
This means that all the work I’m doing now on heuristic algorithms, as well as all the hard work done by humans, will probably not be needed in the future. I just don’t see autorouting as being more difficult (in terms of solution space size) than the art being produced by transformer models right now.
I’m saying this because you’re right, these heuristic algorithms can only get us so far- the problem is really difficult. But our human intuition, the magic black box operation we do, doesn’t seem to be too far beyond the emerging transformer image models.
Meanwhile AI-generated art is full of small defects which people just ignore - who cares about unnatural shadows or unrealistically large fingers? A routed board doesn't get that kind of slack.
It is possible to iteratively combine AI with a DRC checker and loop until everything passes, but it's not obvious to me that this would be performant enough, or whether such a system would stay in some sort of local minimum forever once the circuit is complex enough.
I think spatial partitioning can help solve issues with minor DRC violations as well- it should be easier to correct an image than to generate one from scratch. But I’m not even sure it’ll be necessary because of how coherent image models already are.
And you don't even get to use unit tests to check correctness.
Sure, maybe, given a super-massive amount of data.
I see it as the difference between "I want a photo-realistic portrait of a beagle" and "I want a photo-realistic portrait of my neighbor's beagle, Bob". For the first, there are some general rules as to what makes something a 'beagle', so it's not too hard; the second has specific constraints which can't be satisfied without a bunch of pictures of Bob.
To address the specific issue, an AI would have to learn the laws of physics (aka, "Bobness") from a bunch of pictures of, essentially, beagles in order to undertake the task at hand.
I forgot the name unfortunately, but there was a project recently where they combined AI with physically correct modeling.
EDIT found it, the Genesis project: https://x.com/zhou_xian_/status/1869511650782658846
I think maybe the best way to get this data set is to subsidize a few dozen electronics recycling centers for every unique microCT scan they send you. Lease them the tomography equipment. They increase their bottom line, you get a huge dataset of good-to-excellent quality commercial PCB designs.
My approach is slightly different for building the dataset. I think we should bootstrap an absolutely massive synthetic dataset full of heuristically autorouted PCBs to allow the AI to learn the visual token system and basic DRC compliance. We then use RL to reward improvements on the existing designs. Over time the datasets will get better, similar to how the synthetic datasets produced whenever a new LLM is released make training subsequent LLMs easier.
I think people are underestimating the number of PCBs that are needed to train a system like this. My guess is it is well over 10m PCBs with perfect fidelity. It will make sense to have a large synthetic data strategy.
Some quibbles or glossovers:
> A recursive algorithm is a depth first search. Any loop that explores candidates/neighbors without sorting the candidates is a BFS.
Not sure what to say about this. It's either wrong or I'm missing the intuition. Both DFS and BFS can either be written iteratively or recursively; the real difference is whether you pop your next candidate from the top or the bottom of the stack. Equivalently, whether you use a stack (FILO) or a queue (FIFO). That said, I took maths rather than CS so verify this for yourself.
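Concretely, the same loop becomes DFS or BFS depending only on which end of the frontier you pop from (a toy sketch, not code from the article):

    // pop() from the back => DFS; shift() from the front => BFS.
    function search<Node>(start: Node, neighbors: (n: Node) => Node[], breadthFirst: boolean): Set<Node> {
      const frontier: Node[] = [start];
      const seen = new Set<Node>([start]);
      while (frontier.length > 0) {
        const node = breadthFirst ? frontier.shift()! : frontier.pop()!;
        for (const next of neighbors(node)) {
          if (!seen.has(next)) {
            seen.add(next);
            frontier.push(next);
          }
        }
      }
      return seen; // every node reachable from start, visited in DFS or BFS order
    }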
> It is simply the best foundation for any kind of informed search (not just for 2d grids!)
A* is useful in pathfinding if you have some notion of easily-computed "distance" to your desired target and you're only running a few queries for any given graph. If you plan to run many queries on a mostly-static graph (like a road network) then you could be better off applying a pre-processing algorithm like contraction hierarchies. If you're optimising but don't know what target you're aiming for (e.g. Traveling Salesman) then using some other local search heuristic like 2-opt may be better.
> The major difference between these [A* and BFS] is BFS explores all adjacent nodes, while A* prioritizes exploring nodes that are closer to the destination.
This is definitely a difference. But it glosses over the biggest difference, which is that A* is a dynamic algorithm. That allows you to terminate early with confidence that you have found the shortest path. With BFS you may not be certain until you've searched the whole graph, which could be vast.
The intuition there is that people tend to write algorithms recursively when they easily map to interactions with the top of the stack, since that's easier to express in most languages than bringing in an external stack to think about. Hence, if you see something recursive in the wild it likely looks more like DFS than BFS. As you point out, that isn't a hard rule.
A* is not limited to point-to-point; it also handles point-to-set-of-points just fine (you just might have to reverse the direction you're thinking in). Although this is most often used for "we only need a path to one of them", it's still faster even if you need a path to all of them due to merging. Visiting all points on the graph (as BFS does) is rarely what you actually want (though admittedly, if you're able to throw away the grid it becomes more likely; like the article, I do find people use grids far too often).
BFS works just fine with variable edge weights; you just have to use a priority queue instead of a plain queue for the next set of nodes to be visited.
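Roughly, that means the same search loop with the frontier ordered by accumulated cost, i.e. Dijkstra (sketch only; an array sort stands in for a real priority queue):

    function cheapestFirst<Node>(
      start: Node,
      neighbors: (n: Node) => { node: Node; cost: number }[],
    ): Map<Node, number> {
      const dist = new Map<Node, number>([[start, 0]]);
      const frontier = [{ node: start, cost: 0 }];
      while (frontier.length > 0) {
        frontier.sort((a, b) => a.cost - b.cost);           // a heap in real code
        const { node, cost } = frontier.shift()!;
        if (cost > (dist.get(node) ?? Infinity)) continue;  // stale entry, skip
        for (const { node: next, cost: c } of neighbors(node)) {
          const nd = cost + c;
          if (nd < (dist.get(next) ?? Infinity)) {
            dist.set(next, nd);
            frontier.push({ node: next, cost: nd });
          }
        }
      }
      return dist; // cheapest known cost from start to every reachable node
    }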
Are you sure it works correctly for the shortest path to all points in the set, not just the "we need one of them" case? When running in reverse I would expect the closest point to be prioritized first, which would potentially mark all the area around the target as visited, thus blocking paths from other points in the set from reaching it. It's equivalent to introducing one more point in the graph and connecting it to all the points in the set. Unless it works for one-to-all-in-set, it's a weaker result than what Dijkstra or BFS answers. If your problem doesn't need it, don't use it. But that's my point: use the weakest algorithm which directly answers the required query, and be aware that building a more powerful algorithm by repeatedly calling a more specialized but weaker one won't always be optimal.
For road network routing (which is more my thing) you are orders of magnitude better off pre-processing the graph with contraction hierarchies [1] and then using a specialised algorithm such as RPHAST [2] at query time.
[0] https://www.semanticscholar.org/paper/Heuristic-search-for-o...
[1] https://en.wikipedia.org/wiki/Contraction_hierarchies
[2] https://www.microsoft.com/en-us/research/publication/faster-...
But usually you don't call it BFS if it's using a priority queue?
As you note later in your comment, this is only true in the case of uniform edge weights. In general you can come up with easy counterexamples e.g. a very long direct edge from source to target vs lots of short edges which sum to a smaller amount.
> Any time you’re using a tree you’re ignoring an O(~1) hash algorithm for a more complicated O(log(N)) algorithm
This is incredibly misguided. The hashing approach is fine if your points are evenly distributed and you only really want to query an area relatively close to the fixed subdivisions you've chosen; otherwise your O(1) is going to degenerate to O(n).
Trees are an informed representation when you don't know how your data is distributed.
Similar story for randomized algorithms. What happens when your search space is many trillions of items or possibilities? Or there are no heuristics to be had? If you can't brute force it and you can't use a clever algorithm then randomized algorithms are a savior.
Maybe these aren't needed for this specific application, but best to not make sweeping statements.
But more seriously I think tree-based algos tend to be overhyped, and people get a bit absorbed into thinking about big-O behavior when the constant factors are super important even when you’re looking at hundreds of thousands of elements. Not to mention stuff like data locality. Like sometimes your computer will be faster just doing a sequential scan rather than dealing with the bookkeeping of a fancier structure. Sometimes.
I think a nicer argument overall is to build a tiny wrapper around your operations, build out what is easy, and then figure it out through measurements.
Worst case you end up entirely rewriting your program to accommodate a different structure to try for better perf but in my experience rewriting entire files from scratch tends to get you a good number of improvements for free.
But trees et al also make a lot of sense as a default. If you need the performance to scale in a roughly linear manner you need to be smart about it. This is what sinks a lot of software, usability wise.
I have yet to find a satisfactory way to store points (in 2d or 3d) and be able to query nearby points. A kD tree is nice, but I want to add points as I go, not build a structure around a fixed set.
The top level should usually use a grid, though uniform-density children can also use one. The grid scale should be chosen to maximize low-density children while minimizing empty children.
For low-density children, just store an unsorted bunch of items, or maybe sort them on just one axis (preferably the outermost axis used for rendering). If your outer level is actually uniform, all of its children will be this (or empty).
For high-density children, use a quadtree/octree ... or just treat them as opaque objects to be handled recursively. These still suck for all the memory-walking they do, but since you've handled the outer part using the grid, the overhead is smaller. These can do "nearby" queries if your implementation is good, but many implementations suck. Making them mutable just means you need to either implement rebalancing or just stick to fixed planes.
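For the "add points as I go" case above, a minimal uniform-grid spatial hash is often enough. A rough sketch (the cell size is the main tuning knob; everything else is invented for illustration):

    type Pt = { x: number; y: number };

    class SpatialHash {
      private cells = new Map<string, Pt[]>();
      constructor(private cellSize: number) {}

      private key(cx: number, cy: number): string {
        return `${cx},${cy}`;
      }

      // Points can be inserted incrementally, no rebuild needed.
      insert(p: Pt): void {
        const k = this.key(Math.floor(p.x / this.cellSize), Math.floor(p.y / this.cellSize));
        if (!this.cells.has(k)) this.cells.set(k, []);
        this.cells.get(k)!.push(p);
      }

      // Only checks the cells the query circle can touch.
      nearby(x: number, y: number, radius: number): Pt[] {
        const out: Pt[] = [];
        const r = Math.ceil(radius / this.cellSize);
        const cx = Math.floor(x / this.cellSize);
        const cy = Math.floor(y / this.cellSize);
        for (let i = cx - r; i <= cx + r; i++)
          for (let j = cy - r; j <= cy + r; j++)
            for (const p of this.cells.get(this.key(i, j)) ?? [])
              if ((p.x - x) ** 2 + (p.y - y) ** 2 <= radius * radius) out.push(p);
        return out;
      }
    }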
A*, Lee's algorithm and the like are all cool. It's criminal to write any kind of floodfill without having an accompanying visualization; you're squandering so much dopamine.
This article has me wondering if all the gamedev things I *didn't* read about but are adjacent have utility in this kind of thing. I can't be the first person to think a boids router would be pretty fun. More seriously, I bet jumpflooding signed distance fields would provide you a lot of power.
Everything about spatial hashing in particular matches my experience. I haven't found many occurrences in almost 2 decades where any of the tree structures are worth the time. One notable exception: the lovecraftian text editor I made uses quite a lot of tries for reactivity things. Nice way to compress 45,000 words into a compact state machine for event handling.
I hadn't heard of jumpflooding (for others: fast, parallel algorithm for approximating distance fields), that could definitely be interesting, thanks for the tip!
(It is perhaps worth noting: broadly speaking, "recursive" vs. "non-recursive" is a bit of an artificial distinction. The real question is "does a pre-baked algorithm with rigid rules handle flow control, or do you?" If you care about performance a lot, you want the answer to be you, so having your run state abstracted away into an execution-environment-provided stack that you can't easily mutate weirdly at runtime begins to get in your way).
And then once you've used the playful, expressive (interpreted, abstract, slow) language you enjoy using, to develop an excellent, performant algorithm... if performance still matters, write the same thing again in a performant, low-level language, and perhaps even write architecture-specific assembly.
There's a reason numpy, pandas, OpenCV, TensorFlow aren't written in pure Python, but instead let Python tell them to do things they've implemented in high performance C++/assembly/CUDA, etc.
No matter how proud the authors would be of exploring a problem space and coming up with an efficient algorithm (and blogging about it), I doubt they'd be popular numerical computing libraries if they insisted on it being written in pure Python, or Javascript.
While this is a fun blog post, I don't think it'd have the same takeaways if the author's algorithmic insights got a pure-Javascript HEVC encoder down from 1 day per frame to 3 hours per frame...
IMHO, at the very least the current CS degree needs to be split up - CS should be split into various smaller (and faster achievable) subdegrees - the fancy math stuff should be its own degree, possibly be fused with a new degree relating to AI, database and network theory should be its own degree, and frankly stuff like low-level assembler as well, and the "how do electronic components, NAND gates, boolean logic and whatnot work" moved off to electronics engineering. And what the market actually needs the most - people who can crank out CRUD app stuff - should be either its own degree if one insists that this needs academic knowledge, or be moved off to something like trades education.
In parallel, the job requirements gatekeeping should be tackled by laws - companies must no longer be allowed to require degrees that have little to no impact on the actual job duties. It's forcing our children to waste years of their life and take on five to six figures worth of debt, all only for companies to be able to weed out people.
> I’m controversially writing our autorouter in Javascript. This is the first thing people call out, but it’s not as unreasonable as you might expect. Consider that when optimizing an algorithm, you’re basically looking at improving two things:
> 1. Lowering the number of iterations required (make the algorithm smart)
> 2. Increasing the speed of each iteration
It may be true in this domain, I wouldn’t know, but applied to software engineering in general IMO it would be a massively incorrect assumption to say choice of language doesn’t affect speed and needed number of iterations.
Poor cache/memory/disk accesses can result in constant regressions so vast that a 'worse' algorithm actually fares better. Also, we tend to falsely settle on referring to 'O' rather than omega, theta, and average, even when we don't care about worst-case or contrived workloads. See quicksort and mergesort.
For a similar concept in another domain, see also the external memory model.
There are specific NP-hard problems that aren't spatially cacheable and that we may need WASM for, but these problems tend to be nearly impossible for humans as well, so PCB designers will opt to just add some space on the board or add additional layers.
On the topic of autorouters, the post doesn't really go into why there are no open source data sets and why there aren't many open source tools. It's probably no surprise, but there's a TON of money in it. The author mentions iPhones and 10s of thousands of routes. That's nothing. Modern ICs have billions of transistors and billions of routes with an insanely complicated rule set governing pitch, spacing, jogs, etc. Not to mention that the entire time you have to make sure the parasitic resistance and capacitance from the routing doesn't break the timing assumptions.
Cadence and Synopsys probably put billions yearly into R&D and I bet a big chunk goes into "Place & Route". I've heard that licenses for their tools run into the millions of dollars per head per year.
I love the concept of schematics-as-code. But layout-as-code is awful. Maybe AI tools will improve it enough - but unless there's a user friendly escape hatch which lets you tweak things with a mouse, it's a dead end.
I've had the same problem with auto-generated wiring diagrams and flowcharts. Great in theory, but then the tool generates some ugly thing you can't make sense of and there's no way to change it.
There's definitely tremendous potential for AI in electronic design. But I think the really complicated bit is component selection and library management - you have to somehow keep track of component availability, size and a range of parameters which are hidden on page 7 of a PDF datasheet. Not to mention separate PDF application notes and land patterns.
Compared to all that, connecting up schematic lines and laying out a PCB is fairly straight forward.
1. Autorouting in particular is a problem well-suited to visualization, and not all problems are. It's fundamentally about real things in real space (bonus: they're mostly constrained to a 2D plane, which makes them easy to visualize on a computer screen). A lot of hard and interesting problems don't match that reality and so aren't as amenable to visualization.
(... but the general advice, "look for opportunities to visualize," stands. Your eyes and brain are very fast matchers for a broad variety of patterns in a way a computer isn't without a lot of special-purpose, tuned software).
2. JavaScript, between the language itself (specifically the ability to reflect data structures at runtime) and the ecosystem around it, is very amenable to visualization. In that way specifically, it was a strong choice for this problem domain. Imagine how much work you'd be taking on if you decided to "just bang out a quick visualization" for an arbitrary piece of the algorithm in C++, or Rust.
Hmm, I wonder what python raylib is like; that might be the ticket.
Being able to just plot something and see the algorithm on-screen has been an incredible benison.
I'm hopeful OpenPythonSCAD will be merged into OpenSCAD presently.
I have a python workflow for my laser cutter, I can make a drawing with drawsvg and then cut it with lightburn.
I have used build123d to make 3d models in python but not yet found a way to generate toolpaths for my milling machine.
Give the code a try!
https://github.com/WillAdams/gcodepreview
If you can describe a simple part I can work it up for you and add it to the examples.
EDIT: Alternately, try pyCAM, or Kiri:Moto w/ an STL?
For example you might say "generate an interactive react app that implements a simple pathfinder that uses A* with a penalty for going near edges". You can also just drop in a paper or pseudocode of an algorithm you're ideating on.
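The algorithmic core such a prompt asks for is genuinely small. Here's a rough sketch of grid A* with a near-edge penalty (toy code, not our actual router; grid setup and names are invented):

    type Cell = { x: number; y: number };
    const keyOf = (c: Cell) => `${c.x},${c.y}`;

    function astar(w: number, h: number, start: Cell, goal: Cell, edgePenalty = 5): Cell[] | null {
      const hCost = (c: Cell) => Math.abs(c.x - goal.x) + Math.abs(c.y - goal.y);
      const nearEdge = (c: Cell) =>
        Math.min(c.x, c.y, w - 1 - c.x, h - 1 - c.y) < 2 ? edgePenalty : 0;
      const open = [{ cell: start, g: 0, f: hCost(start) }];
      const gScore = new Map<string, number>([[keyOf(start), 0]]);
      const cameFrom = new Map<string, Cell>();
      while (open.length > 0) {
        open.sort((a, b) => a.f - b.f);                     // a real impl would use a heap
        const { cell, g } = open.shift()!;
        if (cell.x === goal.x && cell.y === goal.y) {
          const path = [cell];                              // walk back to the start
          let k = keyOf(cell);
          while (cameFrom.has(k)) {
            const prev = cameFrom.get(k)!;
            path.push(prev);
            k = keyOf(prev);
          }
          return path.reverse();
        }
        for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
          const n = { x: cell.x + dx, y: cell.y + dy };
          if (n.x < 0 || n.y < 0 || n.x >= w || n.y >= h) continue;
          const ng = g + 1 + nearEdge(n);                   // penalty pushes paths off the rim
          if (ng < (gScore.get(keyOf(n)) ?? Infinity)) {
            gScore.set(keyOf(n), ng);
            cameFrom.set(keyOf(n), cell);
            open.push({ cell: n, g: ng, f: ng + hCost(n) });
          }
        }
      }
      return null;                                          // no path found
    }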
Here's an example how we use react cosmos for our autorouter[2]
[1] https://reactcosmos.org/docs [2] https://unraveller.vercel.app/?fixtureId=%7B%22path%22%3A%22...
Though admittedly animations are quite arduous.
I have a few major nags about current autorouters:
1. No way to prioritise connections.
2. Handling of differential pairs or buses is terrible.
3. Does not use best practices for high-speed signal lines.
4. No way to add arbitrary rules to the autorouting process, e.g. blocking a trace from going through an area. For example, maybe you don't want a power trace being autorouted under your IMU, but some low voltage signals are fine.
I've used freerouting [1] in the past, but it has had a somewhat difficult time staying maintained. Freerouting seems to really struggle with larger designs; it seems to greedily route traces and then spends ages trying to fix those early-placed traces.
BFS can also be written as a recursive search. Even in the non-recursive case, the only difference is whether you use a queue or a stack.
Apart from that, great article.
Write a recursive BFS in Haskell and it won’t blow up the stack.
Java, Go, and JavaScript all lack it (guaranteed tail calls).
I'm not sure how performant modern-day JS is, but aren't you still leaving a lot of performance improvements on the table? Sure, the algorithm may matter more, but who says you can't use the same algorithm?
I strongly believe that the CAD world, including EDA, is on the verge of disruption by AI, or more correctly machine intelligence (constraint programming - CP, to be exact), very similar to how LLMs disrupted chatbot technology [1], [2].
The path to this is most probably solving the autorouting mechanism with CP: a deterministic machine intelligence based on logic, optimization, and constraint programming [3], [4], [5], [6].
Fun fact: the modern founder of logic, optimization, and constraint programming is George Boole, the great-great-grandfather of Geoffrey Everest Hinton, the "Godfather of AI" and "Godfather of Deep Learning".
[1] Most AI value will come from broad automation, not from R & D (313 comments):
https://news.ycombinator.com/item?id=43447616
[2] Diagrams AI can, and cannot, generate (68 comments):
https://news.ycombinator.com/item?id=43398434
[3] Constraint programming:
https://en.wikipedia.org/wiki/Constraint_programming
[4] Logic, Optimization, and Constraint Programming: A Fruitful Collaboration - John Hooker - CMU (2023) [video]:
https://www.youtube.com/live/TknN8fCQvRk
[5] "We Really Don't Know How to Compute!" - Gerald Sussman - MIT (2011) [video]:
https://youtube.com/watch?v=HB5TrK7A4pI
[6] High-Quality Ultra-Compact Grid Layout of Grouped Networks [PDF]:
https://ialab.it.monash.edu/~dwyer/papers/gridlayout2015.pdf
I feel people are getting way too comfortable throwing randomness at things, pretending it's perfectly fine or even beneficial because you can reason over the probability distributions, and then shrugging their shoulders when the results have unpredictable, impossible-to-debug edge-case behavior.
We already have enough unpredictable factors in a program, I feel we don't have to purposefully add even more of them.
https://www.gameaipro.com/GameAIPro2/GameAIPro2_Chapter14_JP...
10000000000 random guesses can be way faster than a clever calculation.
And maybe there is no clever calculation.
Are state-of-the-art autorouters still based on A*?
From what I remember, A* was used in the 80s before Specctra introduced a cost-based, push-and-shove-capable approach in the mid 90s.
A* being what it is, it is probably still a part of contemporary solutions, but I am surprised by the emphasis it got in the post.