I am confused by this without context. I have not heard of Blade, but am aware that Zed built its own GUI library called GPUI. Having used Zed, this is a vote of confidence: the crate ecosystem is historically filled with libraries which try to be The Future of X in Rust but are disconnected from practical applications. GPUI by nature is not that; it's a UI lib built for a practical and non-trivial purpose. It sounds like Blade is a cross-API graphics engine, by one of the original gfx-HAL (former WGPU name) creators?
I have not used GPUI beyond a simple test case, but had (prior to this news?) considered it for future projects. I am proficient with, and love, EGUI and WGPU (the latter for 3D). I have written a library (the `graphics` crate) which integrates the two, and which I use for my own scientific applications with both 2D and 3D components. Overall, I'm confused by this, as I was looking forward to using GPUI in future applications and comparing it to EGUI. I have asked online in several places for someone who's used both to compare them, but I believe this to be a small pool.
I was not sure of the integration between GPUI and WGPU; I can confirm EGUI and WGPU integrate well. But I only care about this because I do 3D stuff; if I were not, I would be using eframe instead of WGPU as the backend.
Unrelated, off-topic, but I'm also not sure where to ask this: am I missing something about Zed? I have tried and failed to get into it. I really want to like it because it's so fast [responsive], but it seems to lack basic IDE functionality in Python and Rust, like moving structs/functions, catching errors dynamically, introspection, and refactoring in general. I thought I might be missing some config, but now lean toward it being more of a project-oriented text editor than a true IDE in the fashion of JetBrains. But I have been unable to get a confirmation, and people discuss it as if it's an IDE or JB alternative.
Cool, I hadn't come across `graphics` when I was looking for a simple UI/3D visualization option after rend3 was abandoned. I have been considering bevy/egui too, but it seems like more effort to learn.
I am one plugin away from moving to it directly instead of vscode. I really like it. It’s fast. It gets updates seemingly daily. I’ve never had it crash. It integrates LLMs well. It’s everything I wish vscode was if it were native.
I installed Zed a few days ago and have been trying to get acquainted with it.
It has far fewer built-in features for refactoring than other editors you might be coming from. It's handled at the LSP level: get the LSP for your language and hit cmd+ to see what it can do. I'm not working in Python or Rust at the moment (Elixir), but I'm sure they have some good extensions.
I don't get the question. Albeit in vim I just use the navigation things, selections, and s/../.. to replace stuff; I am probably using something like 1% of its power.
I'm asking how to move a function or class to a different module (including its methods, imports throughout the project, etc.), as an example of IDE-101 stuff I can't figure out how to do in Zed, and which makes me think Zed might *not* be a replacement.
That’s a code intellisense feature set, not part of the core set of an editor, especially when you have dynamic module loading. An IDE only focuses on a few languages, and it makes sense for it to have that capability.
Rust GUI is in a tough spot right now with critical dependencies under-staffed and lots of projects half implemented. I think the advent of LLMs has been timed perfectly to set the ecosystem back for a few more years. I wrote about it, and how it affected our development yesterday: https://tritium.legal/blog/desktop
Interesting read, however as someone from the same age group as Casey Muratori, this does not make much sense.
> The "immediate mode" GUI was conceived by Casey Muratori in a talk over 20 years ago.
He may have made it known to people not old enough to have lived through the old days, but this is how we used to program GUIs on 8- and 16-bit home computers, and it has always been a thing in game consoles.
> To describe it, I coined the term “Single-path Immediate Mode Graphical User Interface,” borrowing the “immediate mode” term from graphics programming to illustrate the difference in API design from traditional GUI toolkits.
Obviously it’s ludicrous to attribute “immediate mode” to him. As you say, it’s literally decades older than that. But it seems like he used immediate mode to build a GUI library and now everybody seems to think he invented immediate mode?
The difference between a game engine and, say, GDI is just window buffer invalidation: WM_PAINT is not called for every frame, only when Windows thinks the window's rectangle has changed and needs to be redrawn, independently of the screen refresh rate.
I guess I think of retained vs immediate at the graphics library / driver level, because that allows the GPU to take over more and store the objects in VRAM and redraw them. At the GUI level that's just user-space abstraction over the rendering engine, but the line is blurry.
Event based or loop based is separate from retained or immediate.
The canvas API in the browser is immediate mode, driven by events such as requestAnimationFrame.
If you do not draw in WM_PAINT, it will not redraw any state on its own; that's within your control.
GDI is most certainly an immediate mode API, and if you have been around long enough to remember DOS, you would remember how to use WM_PAINT to write a game loop renderer before Direct2D on Windows. Remember BitBlt for off-screen rendering with GDI in WM_PAINT?
It's like the common claim that data-oriented programming came out of game development. It's ahistorical, but a common belief. People can't see past their heroes (Casey Muratori, Jonathan Blow) or the past decade or two of work.
I partly agree, but I think you're overcorrecting. Game developers didn't invent data-oriented design or performance-first thinking. But there's a reason the loudest voices advocating for them in the 2020s come from games: we work in one of the few domains where you literally cannot ship if you ignore cache lines and data layout. Our users notice a 5 ms frame hitch, while web developers can add another React wrapper and still ship.
Computing left game development behind. Whilst the rest of the industry built shared abstractions, we worked in isolation with closed tooling. We stayed close to the metal because there was nothing else.
When Casey and Jon advocate for these principles, they're reintroducing ideas the broader industry genuinely forgot, because for two decades those ideas weren't economically necessary elsewhere. We didn't preserve sacred knowledge. We just never had the luxury of forgetting performance mattered, whilst the rest of computing spent 20 years learning it didn't.
I don't understand this part of your comment, it seems like you're replying to some other comment or something not in my comment. How am I overcorrecting? A statement of fact, that game developers didn't invent these things even though that's a common belief, is not an overcorrection. It's just a correction.
Ah, I read your comment as "game devs get too much credit for this stuff and people are glorifying Casey and Jon" and ran with that, but you were just correcting the historical record.
My bad. I think we're aligned on the history; I was making a point about why they're prominent advocates today (and why people are attributing invention to them) even though they didn't invent the concepts.
I don't really like this line of discourse because few domains are as ignorant of computing advances as game development. Which makes sense, they have real deadlines and different goals. But I often roll my eyes at some of the conference talks and twitter flame wars that come from game devs, because the rest of computing has more money resting on performance than most game companies will ever make in sales. Not to mention, we have to design things that don't crash.
It seems like much of the shade is tossed at web front end, as if it's the only other domain of computing besides game dev.
I mean... fair point? I'm not claiming games are uniquely performance-critical.
You're right that HFT, large-scale backend, and real-time systems care deeply about performance, often with far more money at stake.
But those domains are rare. The vast majority of software development today can genuinely throw hardware or money at problems (even HFT and large backend systems). Backends are usually designed to scale horizontally, data science rents bigger GPUs, embedded gets more powerful SoCs every year. Most developers never have to think about cache lines because their users have fast machines and tolerant expectations.
Games are one of the few consumer-facing domains that can't do this. We can't mandate hardware (and attempts at doing so cost sales and attract community disgust), we can't hide latency behind async, and our users immediately notice a 5 ms hitch. That creates different pressures: we're optimising for the worst case on hardware we don't control, whilst most of the industry optimises for the common case on hardware they choose.
You're absolutely right that we're often ignorant of advances elsewhere. But the economic constraint is real, and it's increasingly unusual.
I think we as software developers are standing on the shoulders of giants. It's amazing how fast and economical stuff like redis, nginx, memcached, and other 'old' software is, written decades ago, mostly in C, by people who really understood what made them run fast (in a slightly different way to games: less about caches and data, and more about how the OS handles low-level primitives).
A browser like Chrome also rests on a rendering engine like Skia, that has been optimized to the gills, so at least performance can be theoretically fast.
Then one tries to host static files on an Express webserver, and is surprised to find that a powerful computer can only serve files at 40MB/s with the CPU at 100%.
I would like to think that a 'Faustian deal' in terms of performance exists: you give up 10, 50, or 90% of your performance in exchange for convenience.
But unfortunately experience shows there's no such thing, arbitrarily powerful hardware can be arbitrarily slow.
And as you contrast gamedev to other domains that get to hide latency, I don't think it's OK that a simple 3-column gallery page takes more than 1 second to load; people merely tolerate this, they don't enjoy it.
And ironically I find that a lot of folks end up optimizing their React layouts way more than what it'd have cost to render naively with a more efficient toolkit.
I am also not sure what advances game dev is missing out on, I guess devs are somewhat more reluctant to write awful code in the name of performance nowadays, but I'd love to hear what advances gamedev could learn from the broader software world.
The TL;DR of what I wanted to say is that I wish there were a linear performance-convenience scale, where we could pick a certain point and use techniques conforming to that, trading two thirds of the max speed for dev experience, knowing our performance targets allow for that.
But unfortunately that's not how it works, if you choose convenience over performance, your code is going to be slow enough that users will complain, no matter what hardware you have.
It clearly didn’t come out of game dev. Many people doing high performance work on either embedded or “big silicon” (amd64) in that era were fully aware of the importance of locality, branch prediction, etc
But game dev, in particular Mike Acton, did an amazing job of making it more broadly known. His CppCon talk from 2014 [0] is IMO one of the most digestible ways to start thinking about performance in high throughput systems.
In terms of heroes, I’d place Mike Acton, Fabian Giesen [1], and Bruce Dawson [2] at the top of the list. All solid performance-oriented people who’ve taken real time to explain how they think and how you can think that way as well.
I miss being able to listen in on gamedev Twitter circa 2013 before all hell broke loose.
There are also good reasons that immediate mode GUIs are largely only ever used by games: they are absolutely terrible for regular UI needs. Since Rust gaming is still largely non-existent, it's hardly surprising that things like egui are similarly struggling. That isn't (or shouldn't be) any reflection on whether or not Rust GUIs as a whole are struggling.
Unless the Rust ecosystem made the easily predicted terrible choice of rallying behind immediate mode GUIs for generic UIs...
I mean, fair enough, but [at least] wikipedia agrees with that take.
> Graphical user interfaces traditionally use retained mode-style API design,[2][5] but immediate mode GUIs instead use an immediate mode-style API design, in which user code directly specifies the GUI elements to draw in the user input loop. For example, rather than having a CreateButton() function that a user would call once to instantiate a button, an immediate-mode GUI API may have a DoButton() function which should be called whenever the button should be on screen.[6][5] The technique was developed by Casey Muratori in 2002.[6][5] Prominent implementations include Omar Cornut's Dear ImGui[7] in C++, Nic Barker's Clay[8][9] in C and Micha Mettke's Nuklear[10] in C.
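To make the DoButton() idea concrete, here's a minimal runnable sketch using egui/eframe — my choice of library for illustration, not something the quote prescribes, and it assumes a recent eframe version:

```rust
// Assumes eframe/egui as a dependency (e.g. eframe = "0.27"); API per recent eframe versions.
use eframe::egui;

fn main() -> eframe::Result<()> {
    let mut count = 0u32;
    eframe::run_simple_native("immediate-mode demo", Default::default(), move |ctx, _frame| {
        egui::CentralPanel::default().show(ctx, |ui| {
            // There is no CreateButton()/handle: calling ui.button() every frame *is* the button,
            // and its return value tells us whether it was clicked this frame.
            if ui.button("Click me").clicked() {
                count += 1;
            }
            ui.label(format!("Clicked {count} times"));
        });
    })
}
```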
Yeah no doubt you're correct. I wasn't disagreeing - just establishing the reasonableness of my original statement. I must have read it in the Dear ImGui docs somewhere.
It might be more accurate to say that he repopularized the term among a new generation of developers. Immediate vs Retained mode UI was just as much a thing in early GUIs.
It was a swinging pendulum. At first everything was immediate mode because video RAM was very scarce. Initially there was only enough VRAM for the frame buffer, and hardly any system RAM to spare. But once both categories of RAM started growing, there was a movement to switch to retained mode UI frameworks. It wasn’t until the early 00’s that GPUs and SIMD extensions tipped the scales in the other direction - it was faster to just re-render as needed rather than track all these cached UI buffers, and allowed for dynamic UI motifs “for free.”
My graying beard is showing though, as I did some game dev in the late 90's on 3Dfx hardware, and learned UI programming on Win95 and System 7.6. Get off my lawn.
Your recent post resonated with me deeply; as someone heavily invested in Rust GUI, I've fallen into this same conundrum. I think ultimately the Rust GUI ecosystem is still not mature, and as a consequence we have to make big concessions when picking a framework.
I also came to a similar endpoint when building out a fairly large GUI application using egui. While egui solves the "draw widgets" part of building out the application, inevitably I had to restructure my app entirely with a new architecture to make it maintainable. In many places the "immediate" nature of the GUI mutably editing the state was no longer an advantage. Not to mention that UI code I wrote 6 months ago became difficult to read, especially if there was advanced layout happening.
Ultimately I've boiled my choices down to:
- egui for practicality but you pay the price in architecture + styling
- iced for a nice architecture but you have to roll all your own widgets
- slint maybe one day once they make text rendering a higher priority but even then the architecture side is not solved for you either
- tauri/dioxus/electron if you're not a purist like me
If your main gripe about the Rust GUI ecosystem is that it's not mature then rewinding 20 years and using Qt/WPF/etc sounds like an excellent alternative. Old and mature versus modern and immature.
> Rust GUI is in a tough spot right now with critical dependencies under-staffed and lots of projects half implemented.
Down the stack, low-level 3D acceleration is in a rough spot too unfortunately. The canonical Rust Vulkan wrapper (Ash) hasn't cut a release for nearly two years, and even git main is far behind the latest spec updates.
The underlying Vulkan API is updated constantly, the last spec update was about two weeks ago. Even if we only count the infrequent major milestone versions, Ash is still stuck at Vulkan 1.3, when Vulkan 1.4 launched in December of 2024.
Damn, I just dove back into a Vulkan project I was grinding through to learn graphics programming. Life, and not having the time to chase graphics programming bugs, led me to put it aside for a year and a half, and these new models were able to help me squash my bug and grok things fully enough to dive back in, but I never even considered that the Rust Vulkan ecosystem was worse off. It was already an insane experience getting imgui, winit and ash to play nice together; after bouncing back and forth with WGPU, I assumed Vulkan via ash was the safer bet.
IIRC there is another raw vulkan library that just generated bindings as well and stayed up to date but that comes with its own issues.
Vulkano? I remember that! Looks like it was updated last week, but I don't know if it's current with the Vulkan API, nor how it generally compares to Ash.
WGPU + Winit + EGUI + EGUI component libs is its own joy of compatibility, but anecdotally they have been updating in reasonable sync. Things can get out of hand if you wait too long between updates, though!
Vulkano is a somewhat higher level library which aims to be safe and idiomatic. It looks like it generates its own Vulkan bindings directly from the vk.xml definitions, but it also depends on Ash, and this comment suggests that both generators need to be kept in sync so they're effectively beholden to Ash's release cadence anyway.
vk.xml[1] is the canonical Vulkan specification; this is updated essentially weekly.
The C++ equivalent, Vulkan-Hpp[2], follows extremely closely behind. Plus, ash isn't just an FFI wrapper; it does quite a bit of RAII-esque state and function pointer management that is generally required for Vulkan.
In my experience immediate mode guis almost always ignore internationalization and accessibility.
The thing you get by using an OS widget and putting a string in it is that the OS can interact with the string. It can read it out loud, translate it, fill it in with a password, look it up in a dictionary, edit it right to left, handle input method editors whose hot keys conflict with the app doing its own editing, etc.
There’s a reason why the most popular ImGUIs are targeted at game dev tools and in-game dev UIs, and not end-user UIs.
You could potentially make an immediate mode GUI that wrapped a retained GUI; arguably that is what React is. From the programmer's POV it's supposed to look like imgui code all the way down. It runs into the issue of having to keep two representations in sync: the UI represented by React and the actual widgets (HTML or native). That's where all its complications come from.
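As a toy illustration of that sync problem (not how React actually works; identity here is plain value equality rather than keyed diffing and prop patching), the retained set has to be reconciled every frame against whatever the immediate-style code declared:

```rust
use std::collections::HashSet;

// What the immediate-mode layer declares each frame.
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct Widget {
    key: String,
    label: String,
}

// Patch the retained widget set to match this frame's declaration,
// destroying what disappeared and creating what is new.
fn reconcile(retained: &mut HashSet<Widget>, declared: &[Widget]) {
    let declared: HashSet<Widget> = declared.iter().cloned().collect();
    retained.retain(|w| declared.contains(w)); // destroy stale widgets
    for w in declared {
        retained.insert(w); // no-op if the widget already exists
    }
}

fn main() {
    let mut retained = HashSet::new();
    reconcile(&mut retained, &[Widget { key: "save".into(), label: "Save".into() }]);
    reconcile(&mut retained, &[Widget { key: "quit".into(), label: "Quit".into() }]);
    println!("{retained:?}"); // only the "quit" button survives frame 2
}
```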
Yes, one argument that I didn't make in the post but that does favor immediate mode is that you can somewhat straightforwardly convert from an immediate mode GUI to retained mode by just introducing your own abstractions. In some sense this makes you more disciplined about the FPS, which could be a net win overall.
[Note that Tritium at least is translated into a number of a different languages. That part isn't that hard.]
This is why I'm using LLMs to help me hand code the GUI for my Rust app in SDL2. I'm hoping that minimizing the low-level, drawing-specific code and maximizing the abstractions in Rust will allow me to easily switch to a better GUI library if one arises. Meanwhile, SDL is not half bad.
Honestly I think all native GUI is in a tough spot right now. The desktop market has matured so there aren't any large companies willing to put a ton of money into new fully featured GUI libraries. What corporate investment we do see into new technologies (Electron, SwiftUI, React Native) is mainly to allow developers to reuse work from other platforms like web and mobile in order to cut costs on desktop development. Without that corporate investment I don't think we'll ever see any new native GUI libraries become as fully featured as Win32 or Qt Widgets.
I 100% agree on pretty much everything. The "webapp masquerading as a native app" is a huge problem, and IMO, at least partially because of a failure of native-language tooling (everything from UI frameworks to build tools --- as the latter greatly affect ease of use of libraries, which, in turn, affects popularity with new developers).
To be honest, I've been (slowly) working towards my own native GUI library, in C. It's a big undertaking, but one saving grace is that --- at least on my part --- I don't need the full featureset of Qt or similar.
My plan for the portability issue is to flip the script --- make it a native library that can compile to the web (using actual DOM/HTML elements there, not canvas/WebGL/WGPU). And on Android/iOS/etc, I can already do native anyway.
Though I should add that a native look is not a goal in my case (quite a few libraries already go for that, go use those! --- and some, like Windows, don't really have a native look), which also means that I don't have to use native widgets on e.g. Android. The main reason for using DOM on the web is to be able to provide for a more "web-like" experience, to get e.g. text selection working properly, as well as IME, easier debuggability, and accessibility (an explicit goal, though not a short-term one --- in part due to a lack of testers).
Though it wouldn't be too much of a stretch to allow either canvas or DOM on the web at that point --- by treating the web the same as a native platform in terms of displaying the widgets.
It's more about native performance, low memory use, and easy integration without a scripting engine in between, with a decent API.
I am a bit on the fence between an immediate-mode vs retained-mode API. I'll probably do a semi-hybrid, where it's immediate-y but with a way to explicitly provide "keys" (kind of like Flutter, I think?).
Ok so it is not going closed source, they are just going to extend it as they need to drive Zed features. Totally understandable for an in-house UI framework, this is why you’d build one yourself anyway. I can imagine maintaining backwards compatibility, doing releases, writing documentation and growing a community around it is a considerable distraction from their product work.
Open source GUI development is perpetually cursed by underestimating the difficulty of the problem.
A mature high-quality GUI with support for all the features of a modern desktop UI, accessibility, support for all the display variations you encounter in the wild, high quality rendering, high performance, low overhead, etc. is a development task on par with creating a mature game engine like Unity.
Nearly all open source GUI projects get 80% of the way there and stall, not realizing that they are only 20% of the way there.
You're right, and I think that's because the core functionality of a UI lib is not too difficult. I've tinkered in that space myself, and it's a fun side project.
Then you start to think about full Unicode support, right-to-left rendering, and so on. Then you start to think about properly implementing accessibility features. The necessary work increases by an order of magnitude. And it's not fun work. So you stall out with a bare-bones implementation.
I started writing a program that needed to have a table with 1 million rows. This means it needs to be virtualised. Pretty common in GUI libraries. The only Rust GUI library I found that could do this easily was gpui-component (https://github.com/longbridge/gpui-component). It also renders text crisply (rules out egui), looks nice with the default style (rules out GTK, FLTK, etc.), isn't web-based (rules out Dioxus), was pretty easy to use and the developers were very responsive.
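For anyone unfamiliar with the term: a virtualised table only instantiates the rows currently in view, recomputing that window from the scroll offset. This is not gpui-component's API, just a sketch of the core arithmetic any such widget performs:

```rust
use std::ops::Range;

// Which rows need to exist right now, given a fixed row height.
// (Sketch only; real widgets also handle variable heights, overscan, sticky headers, etc.)
fn visible_rows(scroll_px: f32, viewport_px: f32, row_px: f32, total: usize) -> Range<usize> {
    let first = (scroll_px / row_px).floor() as usize;
    let count = (viewport_px / row_px).ceil() as usize + 1; // +1 for the partially visible row
    first.min(total)..(first + count).min(total)
}

fn main() {
    // 1,000,000 rows, but only ~41 are ever instantiated for a 1000 px tall viewport.
    let rows = visible_rows(512_000.0, 1000.0, 25.0, 1_000_000);
    println!("rendering rows {rows:?}"); // rendering rows 20480..20521
}
```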
Definitely the best option today (I would say it's probably the first option that I haven't hated in some way). The only other reasonable choices I would say are:
* egui - doesn't render very nicely and some of the APIs are amateurish, but it's quick and it works. Good option for simple tools.
* Iced - looks nice and seemed to work fairly well. No virtualised lists though.
* Slint (though in some ways it is weird and it requires quite a lot of boilerplate setup).
All the others will cause you pain in some way. I think the "ones to watch" are:
* Makepad - from the demos I've seen this looks really cool, especially for arty GUI projects like synthesizers and car UIs. However it has basically no documentation so don't bother yet.
* Xilem - this is an attempt to make a 100% perfect Rust GUI library, which is cool and all, but I imagine it also will never be finished.
I wouldn't bother watching Makepad. They're in the process of rewriting the entire thing with AI and (it seems to me) destroying any value they have accumulated. And I also suspect Xilem will never be finished.
Beyond egui/Iced/Slint, I'd say the "ones to watch" are:
* Freya
* Floem
* Vizia
I think all three of those offer virtualized lists.
Dioxus Native, the non-webview version of Dioxus is also nearing readiness.
I’m currently writing an application that uses virtual lists in GTK: GtkListView, GtkGridView, there may be others. You ruled out GTK because of its looks I guess, I’m targeting Linux so the looks are perfect.
Yeah, I need cross platform, and GTK looks quite foreign on Windows/macOS IMO. I toyed with custom themes, but couldn't find any I liked for a cross platform look (wanted something closer to Fluent UI).
Not just because of its looks to be fair. Not being native Rust is a pain, and GTK only really works nicely on Linux. At least without a ton of effort to fix everything (I think some apps like maybe Mypaint have done that, but I don't want to).
I believe the latest Iced versions do have a `Lazy` widget wrapper, but that effectively means you need to make your own virtual list on top of it.
Custom widgets aren’t particularly hard to do in iced, but I wish some of those common cases would be committed back / made available.
Besides the above virtualised lists, another case I hit was layered images (sprites, for example). Not very hard to write my own, sure, but it'd be nice to have that out of the box, as in e.g. egui.
I've been somewhat involved in a project using Iced this week, seems pretty reasonable. Not sure how tricky it would be to e.g. invent custom widgets though.
Really? It seems better than ever to me now that we have gpui-component. That seems to finally open doors to have fully native guis that are polished enough for even commercial release. I haven't seen anything else that I would put in that category, but one choice is a start.
The problem is that Zed has understandably and transparently abandoned supporting GPUI as an open source endeavour except to the extent contributions align with its business mission.
I remember when that came out, but I'm not sure I understand the concern. They use GPUI, so therefore they MUST keep it working and supportable, even if updating it isn't their current priority. Or are you saying they have a closed source fork now?
Actually, this story is literally them changing their renderer on linux, so they are maintaining it.
> except to the extent contributions align with its business mission
Isn't that every single open source project that is tied to a commercial entity?
I don't know what the message means exactly, but I can't plan to build on GPUI with it out there, especially when crates that don't carry that caveat are suffering from being under-resourced.
I tried gpui recently and I found it to be very, very immature. Turns out even things like input components aren't in gpui, so if you want to display a dialog box with some text fields, you have to write it from scratch, including cursor, selection, clipboard etc. — Zed has all of that, but it's in their own internal crates.
Do you know how well gpui-component supports typical use cases like that? Edit boxes, buttons, scroll views, tables, checkbox/radio buttons, context menus, consistent native selection and clipboard support, etc. are table stakes for desktop apps.
Zed also stopped GPUI (their GPU accelerated Rust UI framework) development for now, sadly.
> Hey y'all, GPUI development is getting some major brakes put on it. We gotta focus on some business relevant work in 2026, and so I'm going to be pushing off anything that isn't directly related to Zed's use case from now on. However, Nate, former employee #1 at Zed, has started a little side repo that people can keep iterating on if they're interested: https://github.com/gpui-ce/gpui-ce. I'm also a maintainer on that one, and would like to try to help maintain it off of work hours. But I'm not sure how much I'll be able to commit to this
Using mainstream libraries instead of reinventing the wheel would have been a good decision with or without VC money.
I like Zed but it's still my secondary editor because it's missing usability features that I value in other editors. I think we all benefit if they focus their attention on the parts of Zed that differentiate it rather than writing new frameworks and libraries.
Yes, so I'm glad Zed at least did spend the time to reinvent the wheel, because it benefits everyone to focus on performance, not to mention we have a high quality piece of OSS at the end of it, as even if it's paused development for now, it can still be forked or otherwise iterated upon.
I think the parent meant that Zed could not have used an established UI library like GTK or Electron since performance was such a big focus of the editor.
You vastly overestimate the amount of pressure a board can place on an early stage startup. The far more likely scenario to me (someone who raised VC money) is that the CEO likely looked at their run rate and decided to prioritize things more aggressively. This is hardly surprising and it has nothing to do with VCs.
>What's that, doing actual work rather than labor-of-love open source stuff?
except the 'labor-of-love' stuff is what set the editor apart and why real users were choosing it and the 'actual business work' the moneymen are eager about is exactly what's in every other editor and what nobody asked for
They wrote GPUI as a business decision, to focus on performance, because they knew that that would be a core differentiator to all the other IDEs out there that use Electron for example. That they also liked writing it (as a "labor of love") is incidental.
Without such venture capital, I doubt GPUI, at least to the level of complexity it has today rather than being a toy project, would have even existed. It costs money to develop open source sustainably.
Companies start with founders funding themselves through savings and friends and family rounds before institutional investors are usually even interested. But make no mistake, they start it as a commercial venture, otherwise they wouldn't have taken VC in the first place, nevermind that VCs wouldn't have funded it if not for their pitch on how it could become a billion dollar company.
And since Sequoia? It is primarily the Zed team working full time on it, which costs money.
Who said anything about billions? I just said that it costs money to pay people to work on OSS, which is accurate as ImGui is sponsored by companies and Qt is a commercial entity with infamous licensing. VC doesn't necessarily mean billions in funding.
While unfortunate, to me this just says any user requested features aren't going to get merged anytime soon. As is, it already runs on windows/linux/mac, and will need to do so maturely for Zed to function. Therefore, to me, this isn't that big of a deal, and when they need things like web support (on their roadmap), they will then add that.
I'm curious... does anyone have any PRs or features that they feel need merging in order to use GPUI in their own projects? (other than web support)
Sadly it doesn't actually look like gpui-ce has any activity, the maintainer merged one pull request (literally, #1) and then stopped. They should've just added more community maintainers to the GPUI repo directly rather than having a fork.
I started the gpui-ce fork but I'm becoming somewhat more interested in a fresh framework that is more aligned with the rust ecosystem in general - using crates like glam/glamour, parley, palette, etc
Lots of gpui was built with building Zed/a text editor in mind directly, and as folks have mentioned here, it is hard for Zed Industries to justify work on gpui that is purely for the community. Nathan is usually pretty pragmatic around not optimizing early, and gpui is generally serving Zed's needs at the moment (from what I know; I haven't worked on Zed since July).
I do think ZI would generally benefit if gpui did get pulled out of Zed if there was a community that was passionate about taking it over... but that is time and effort in itself.
You might also want to look into Dioxus Native, as it's doing a lot of what you're interested in too, with taffy and vello for example. The gap I see in the Rust UI ecosystem, as you asked, is that I want a true cross-platform solution for mobile, web, and desktop, while most frameworks focus only on desktop. I use Flutter currently for this purpose but need to pull in Rust crates through an FFI layer like flutter_rust_bridge, plus a backend server in Rust, and have to share types with the frontend through some agnostic format like GraphQL, so it'd be nice to have everything in one language. Dioxus Native does in fact bill itself as "Flutter but in Rust", which I'm looking forward to a lot.
What was it like working at Zed? Any reason for leaving?
I would be curious to hear about where folks are finding gaps in the rust ui ecosystem though...
I've written quite a lot of rust UI code for Zed over the past few years so I'm mostly familiar with the pros and cons of gpui, but I haven't spent much time with Iced, Dioxus, Xilem, etc.
Iced is promising, using it for a small side project. Fairly straightforward and easy to use, but lacking basic things from more mature libraries (unsurprisingly, since it's still early). If you want something like a QTreeView for example, you're on your own. It's cool that it supports WASM, though I'd call it alpha support for now.
Yet more disruption caused by coding agents, I’m sure. We saw it quite visibly with Tailwind, and now I can see code editors maybe struggling too, especially something like Zed, which was probably still used mostly by early-adopter-type people, who have early-adopted TUI coding agents instead.
I don't think it means they're struggling financially. I think it means they're not steering the ship alone any more, and are responsible to others. That's how accepting investment money generally works.
The thing with GPUI is that the library itself is very low level and its scope is limited (by design, I suppose); the UI components are in a separate crate with a GPL license, while GPUI's license is Apache.
As far as GPUI goes, it has a great foundation; the community can build the components themselves.
Text editors are for cleaning up after the agents, of course. And for crafting beautiful metaprompt files to be used by the agentic prompt-crafter intelligences that mind the grunt agents. And also for coding.
Iced seems really promising, however, it's a passion project by a single developer. They very clearly stated that their goal is to follow their passions and desires first, everyone else second, and that it will always be a single person project. Their readme even discourages contributions.
Companies using it in production are often forking it as a result, and trying to keep their fork in sync. Ultimately, if the community wants iced to become a major and stable framework, it will have to be forked and a community development model built around it.
And I'm not saying this to disparage the author in any way, their readme even seems to suggest that that's exactly what they'd prefer.
I'm partial to Dioxus with their native renderer coming up, it should work cross-platform on mobile, web, desktop like Flutter (except web is actually HTML and CSS, not canvas) rather than only desktop which is what most Rust GUI frameworks are targeting.
Not contesting your claim, but would you mind sharing what major hardware vendor you mean?
I love iced and wrote a decent amount of code using it, but in my mind the biggest sponsor is system76 - and as awesome as they are they aren’t a major vendor yet :)
Has System76 started designing, or more correctly outsourcing more expensive custom motherboard designs, like Lenovo and Dell or are they still selling slightly customized white-label laptops?
Not sure how the UI engine itself compares, but to me it is all about the available components (as a total non-designer, although AI helps with that now). The only choice I have at the moment that would meet my needs is gpui, as gpui-component now exists.
Switched from IntelliJ (various) to Cursor because of the AI integration, only using the Claude Code CLI; switched to VS Code because Cursor became so annoying every release, pushing their agents down my throat, re-activating what I had deactivated every release; recently thought "Why do I even use that slow bloated thing?" and switched to Zed. Very happy camper. So much faster. So much snappier. Would love Claude Code CLI integration but can live without it. Would pay for Zed as I paid ~25 years for IntelliJ.
Are you familiar with ACP[0]? Through that protocol you can run claude code within zed[1]. Or perhaps I'm not understanding what you mean by using CC integration.
Yep. Zed is the best. It’s in an optimum spot for me. It’s super snappy and has good implementation of vim keybindings for manual coding, and it has appropriate AI integration that does all the AI stuff I want without being in my face about how AI it all is.
Yeah, I find Zed to be strictly the best experience for "I actually want a pleasant editor" experience. Using it reminds me of when I finally switched from nano to sublime as a freshman.
An interesting side effect of moving to wgpu is that in theory with some additional work, this could allow you to run Zed in a web browser similarly to how some folks run VSCode as a remote interface to the backend running on a server.
From the PR, it sounds like the switch to WGPU is only for linux. The team was reluctant to do the same for macOS/Windows since they felt their native renderer on those platforms was better and less memory intensive.
> This definitely would be worth some profiling. I don't think it's a given that their custom stacks are going to beat wgpu in a meaningful way.
They probably will for memory usage. Current wgpu seems to have a floor around ~100mb that isn't there with other rendering backends (and it was more like ~60mb with wgpu a few months / versions ago).
Not sure if this is fixable in wgpu, or whether it's to do with spec compatibility (my guess would be that it's fixable, just not a top priority for the team atm).
WebGPU has some surprising performance problems (although I only checked Google's Dawn library, not Rust's wgpu), and the amount of code that's pulled into the project is massive. A well-made Metal renderer which only implements the needed features will easily be 100x smaller (in terms of linecount) and most likely faster.
There is also the issue that it is designed with JavaScript and browser sandbox in mind, thus the wrong abstraction level for native graphics middleware.
I am still curious how much uptake WebGPU will end up having on Android, or if Java/Kotlin folks will keep targeting OpenGL ES.
WGPU is just a layer over the top of the native APIs on any given platform so unless Zed's DirectX/Metal renderers were particularly bad it's unlikely WGPU will be better here.
I'm not saying it would be better, I'm saying it may not be particularly much worse. Which still might make it worth simplifying everything by settling on one rendering abstraction
I don't think it would, but I don't think it's a given that their homegrown renderer is wildly more performant either - people tend to overestimate the performance of naive renderers
wgpu isn't a renderer though, it's an abstraction layer. It's honestly hard for me to imagine it ever being faster than writing directx or metal directly. It has many advantages, like that it runs in browsers and is memory safe (and in the case of dawn, has great error messages). But it's hard for it to ever be as fast as the native APIs it calls for you.
Rendering in the browser has nothing to do with being able to do remote editing like you can in VSCode - you would just be able to edit files accessible to the browser.
Just like you can hook up local VS code native up to a random server via SSH, browser rendering is just a convenience for client distribution.
You would need a full client/server editor architecture that VS code has.
> There is significant work beyond the renderer that would need to happen to run Zed in a browser - notably background tasks and filesystem/input APIs would need web/wasm-compatible implementations.
Well, not really. It means you have a renderer that is closer to being portable to web, not an editor that will run in web "with some additional work". The renderer was already modular before this PR.
I believe they're referring to running Zed entirely in a browser. This opens up possibilities like using zed for something like codepen, or embedding it into a git web frontend like gitea. Many projects like this basically embed vscode, a rare benefit of being an electron app which Zed is not.
Sure it takes very little hardware power to do this, but Zed isn't actually set up for this yet. This is in theory, and after a few more APIs are adapted.
Zed is my goto editor when I'm not vibe coding, but that is rare these days.
Their integration with Claude Code, etc really helped, but Antigravity completely pulled me away.
And really, since they're catering to the same basic audience, the defaults should be the same as VSCode for most stuff. VSCode but performant would be an excellent pitch for the upcoming consumer RAM deficient world.
Dunno how they plan to get wider extensibility and community support without an embedded JS backend to support the existing Code plugins. That's where the real blocker is.
Curious: what's your primary programming language and what sort of development do you do? In my experience, LLM agentic coding paired with a good IDE works wonders. It also allows me to surgically write critical bits of code myself while outsourcing the boilerplate stuff.
I find it odd the Rust community feels the need to reimplement tried and tested APIs in "pure safe Rust". No other language has better C integration, and we have had cross-platform windowing libraries since like the 90's, so why does everyone reach for brand new unstable libraries with less maintainer support?
My very weak understanding is that a lot of the C/C++ libraries heavily leverage concepts like inheritance that don't map well to Rust, and so a lot of the GPU work has been "how do we actually make this an idiomatic API?" and that has required more experimentation.
AFAIK people 100% are using other libraries for UI, but often use a macro or something to force Rust to behave in a way that those libraries expect.
I haven't read about this in literally years, but that's my recollection.
Aside from Rust being better (impl is such a great decoupling, fearless type safety), there is AFAIK nothing one tenth as useful and good as cargo & its crate ecosystem (docs.rs, crates.io, and all the packages).
I find it odd the broader hacker community feels the need to re-question and cross-examine every choice to use Rust. No other language has such great just-works ergonomics, with a solid language, fantastic tooling, and excellent packages that give it works-the-first-time cross-platform joy. Why does every thread have to spawn a brand new unsupported whinge throwing dirt at what seems like such an obviously enjoyable choice?
I think you might be misunderstanding the parent comment. It sounds to me like they're arguing in favor of wrapping a C GUI library when writing a GUI app in Rust, not avoiding Rust entirely. As far as I can tell, they're arguing for writing new stuff in Rust that happens to be re-using some components that aren't in Rust. I'd argue that's entirely in the spirit of Rust; kind of the whole point is that you can put a hard boundary on where the unsafety lies and make everything safe outside of that boundary. When I use a Vec or a HashMap, there's unsafe code under the hood, but it doesn't stop me from writing my own code without needing to dip into unsafe at all, and there's no fundamental reason why the same couldn't be done by wrapping Qt or Gtk on Linux, or Cocoa on macOS.
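A minimal sketch of what that boundary can look like, with a hypothetical C widget library (the `cwidget_*` names are made up for illustration; a real project would get them from a -sys crate, and this assumes an edition-2021 extern block): all the unsafe lives in the wrapper, and callers only ever see a safe type.

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// Hypothetical C API (illustrative only); in practice, generated by bindgen or a -sys crate.
#[repr(C)]
struct CWidgetButton {
    _private: [u8; 0],
}
extern "C" {
    fn cwidget_button_new(label: *const c_char) -> *mut CWidgetButton;
    fn cwidget_button_free(btn: *mut CWidgetButton);
}

// Safe wrapper: owns the C object, frees it on drop, never leaks raw pointers to callers.
pub struct Button(*mut CWidgetButton);

impl Button {
    pub fn new(label: &str) -> Option<Button> {
        let label = CString::new(label).ok()?;
        // The unsafety is confined to this boundary; code using `Button` stays entirely safe.
        let ptr = unsafe { cwidget_button_new(label.as_ptr()) };
        if ptr.is_null() { None } else { Some(Button(ptr)) }
    }
}

impl Drop for Button {
    fn drop(&mut self) {
        unsafe { cwidget_button_free(self.0) };
    }
}
```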
Ah, I meant to reply to https://news.ycombinator.com/item?id=47003058. Never questioned the use of Rust, only the need for the entire windowing stack to be in Rust (that blog post shows a case where it bit them)
One could argue that Rust isn't well suited for GUI development at all, where class-based OOP really shines.
Then there is the issue that the Rust community likes to rewrite classic C programs because of "memory safety" and "modern tooling," but really just focuses on the easy 80% of the work. It feels like these rewrites are done more to gain popularity on GitHub than anything, as they most often remain incomplete and never replace the original implementation.
Finally there is the GPL to MIT licensing issue, on which much has been said already.
GUI is much more than just cross platform windowing. Which fwiw, is a mostly solved problem in Rust - there's not a bunch of reimplementation or instability. The ecosystem is solidified behind winit (*).
Also, we don't have good cross platform desktop GUI libraries in C. That's why everyone started using Electron.
I hope this can somehow improve the font situation. Even on a 1440p monitor, the fonts in Zed are much blurrier than in any other editor I've used. I can't even use bitmap fonts like I can in VSCode.
In 2020, I started working on a (C++) game engine. Since the only decent open-source UI option was Dear ImGui (which was obviously a bad choice for consumer-facing UIs), I ended up rolling my own retained-mode UI library on top of SDL. Now, it's fully-featured enough that I rarely have to touch it. There's even a major company using it for embedded products.
I don't get why every language's community doesn't just do the same thing: roll an idiomatic UI lib on top of SDL. It was tough, but I was able to do it as a single person (who was also building an entire game engine at the same time) over the course of a couple years.
I haven't worked on screen reader support, yet. Support for alternative text input is built into SDL. UI size scaling is a feature I plan on adding eventually.
> I don't get why every language's community doesn't just do the same thing: roll an idiomatic UI lib on top of SDL.
> I haven't worked on screen reader support, yet. Support for alternative text input is built into SDL. UI size scaling is a feature I plan on adding eventually.
Well, that's why :)
For most serious applications, accessibility isn't a second thought, it's a requirement and it's very hard to implement correctly.
So the solution is to build applications around less of a common base? I don't follow the logic, with respect to Zed. I get what you mean if there's a first-party UI solution in your language (e.g. Swift), but in that case you don't need an open-source UI library.
The solution, if you want a production ready GUI, is to use a GUI toolkit which already has decent accessibility support.
There aren't that many of those: .NET, AppKit/UIKit, SwiftUI, Qt, GTK, the web, wxWidgets (which is really just GTK/AppKit/.NET), probably a couple others. So you either use the native language of one of those toolkits, or you use bindings from your language to those toolkits.
What is the debt? As a user, Zed feels more like the only IDE that isn't weighed down with debt. It's incredibly fast, responsive, stable, and it's iterated on very quickly.
That happens when you don't talk to enough users, build the wrong things, and then iterate like a good startup. It compounds
Building a chat platform in an IDE with CRDTs...? That screams we are more interested in the solutions than the problems, and that they didn't appreciate network effect before attempting this
They are talking to their users; it's just that those users aren't the people who use Zed, it's the VC firm that funds them. They seem to be implementing everything those users want.
If you only talk to the users you already have, you won't know what the users who don't use your product want. Many a project and company have peaked early for this very reason.
Will this help running Zed in environments with no GPU/old GPUs? There have been some complaints about not being able to run Zed on Ivy Bridge or in VMs, even though browsers and other applications work perfectly fine
Oh sweet! I threw out GPUI completely from one of my projects because of Blade. I needed headless with rendering to image for e2e testing and gave up on GPUI after trying to mess with Blade. It’s definitely a mess and moving to egui has only shuffled the deck chairs around.
Oh, this is nice. The latest builds stopped working on panfrost because it does not announce the surface capabilities or something like that. Maybe I can have it back to working on the Orange Pi.
I don't know why Blade was decided on, but it was started by Kvark who worked on WGPU for Mozilla for some time. I assumed it would be a good library on that basis.
Is webgpu a good standard at this point? I am learning vulkan atm and 1.3 is significantly different to the previous APIs, and apparently webgpu is closer in behavior to 1.0. I am by no means an authority on the topic, I just see a lack of interest in targeting webgpu from people in game engines and scientific computing.
For a text editor it's definitely good enough if not extreme overkill.
Other than that, the one big downside of WebGPU is the rigid binding model via baked BindGroup objects. This is both inflexible and slow when any sort of 'dynamism' is needed, because you end up creating and destroying BindGroup objects in the hot path.
The modern Vulkan binding model is relatively fine. Your entire program has a single descriptor set containing an array of images that you reference by index. Buffers are never bound and instead referenced by device address.
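For contrast, roughly what the baked-BindGroup side of that looks like in wgpu. This is a fragment, not a runnable program: it assumes a `device`, a `layout`, and two texture views already exist, and exact field names can differ between wgpu versions.

```rust
// Bindings are frozen into an immutable BindGroup up front...
let material_a = device.create_bind_group(&wgpu::BindGroupDescriptor {
    label: Some("material-a"),
    layout: &layout,
    entries: &[wgpu::BindGroupEntry {
        binding: 0,
        resource: wgpu::BindingResource::TextureView(&texture_a_view),
    }],
});

// ...so drawing with a different texture means baking a whole new group (potentially
// every frame, in the hot path), rather than just passing an index into a bindless array.
let material_b = device.create_bind_group(&wgpu::BindGroupDescriptor {
    label: Some("material-b"),
    layout: &layout,
    entries: &[wgpu::BindGroupEntry {
        binding: 0,
        resource: wgpu::BindingResource::TextureView(&texture_b_view),
    }],
});
```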
Bevy engine uses wgpu and supports both native and WebGPU browser targets through it.
The WebGPU API gets you to rendering your first triangle quicker and without thinking about vendor-specific APIs and histories of their extensions. It's designed to be fully checkable in browsers, so if you mess up you generally get errors caught before they crash your GPU drivers :)
The downside is that it's the lowest common denominator, so it always lags behind what you can do directly in DX or VK. It was late to get subgroups, and now it's late to get bindless resources. When you target desktops, wgpu can cheat and expose more features that haven't landed in browsers yet, but of course that takes you back to the vendor API fragmentation.
It's a good standard if you want a sort of lowest-common-denominator that is still about a decade newer than GLES 3 / WebGL 2.
The scientific folks don't have all that much reason to upgrade from OpenGL (it still works, after all), and the games folks are often targeting even newer DX/Vulkan/Metal features that aren't supported by WebGPU yet (for example, hardware-accelerated raytracing)
Seeing that the author of Blade (kvark) isn't exactly a 3D API newbie and also worked on WebGPU I really wonder if a switch to wgpu will actually have the desired long term effect. A WebGPU implementation isn't exactly slim either, especially when all is needed is just a very small 3D API wrapper specialized for text rendering.
Cross-API graphics abstractions are almost always a bad idea even if it's just wrapping modern DX12 and Vulkan, and always are when Metal comes into the mix.
Kvark was leading the engineering effort for wgpu while he was at Mozilla.
But he was doing that on his work time and did so collaborating with other Mozilla engineers, whereas AFAIK blade has been more of a personal side project.
Can someone, who knows computer graphics, explain why the old library had so many issues with flickering and black triangles or rectangles flashing on the app, and why the new library is expected to not have those same problems?
Idk, I use Zed because it's so dang smooth. On a 144hz screen it feels really nice to use, other editors like v$code and IntelliJ have noticeable stutter and I find that to be distracting, in the same way it's jarring to go back to a 50hz monitor after getting used to 144.
There’s a lot of small things you’ll hit if you use Zed where it’s a subtly nicer design point, but one of the big ones for me is project-wide search. Zed’s multibuffers are SO much better than VS Code’s equivalent.
If I’m debugging something on a coworkers laptop, VSCode is mostly usable until I hit that.
If you’re a craftsman, it’s worth trying different tools!
Agreed, multibuffers are such a huge QOL feature. I love being able to work across a dozen or more buffers at once with no impact on performance. You can work in so many places at once, navigate from the buffer to its file and back, widen the buffer up or down, etc. It feels like a super power.
A lot of people use VSCode. Zed's value proposition is being basically that but with fully native code, so without the madness that is Electron. If you're not a fan of this kind of tooling, it's totally fine, but many people see the value in having an extensible graphical code editor
Zed is, in fact, fully native. It's top-to-bottom Rust, which gives them C++-equivalent speeds or better, compiles to native code, and lets them much more easily make use of multi-threaded parallelism than basically any other language that compiles to a static binary. They also use a custom GUI framework built from the graphics drivers up to be maximally efficient, smooth, and low latency; that's literally the subject of this thread!
The only reason it would be spawning Node.js processes is if it's running a JavaScript/TypeScript language server for you, but that's not a property of Zed itself; it's something any other editor would do (including VS Code). Also, the resident memory of Zed, even with multiple entire projects with hundreds of tabs open, running several language servers and multiple terminals and AI agents, never exceeds about 900 megabytes for me, which is significantly less than VS Code uses even at startup.
Whatever it was that you ran into, it's likely some kind of fluke or platform-specific bug.
My tone probably came off as antagonistic and that was not my intention. I was interested in whether anyone was using the high-fidelity graphical features for something other than making the environment prettier.
I am always interested in what features new editors have, how people use them, and whether I am missing out.
As far as I can tell, no. I moved to zed from nvim for fast starts + better AI UX with edit prediction & agents than nvim without start time/RAM of cursor. It delivered on that, but now that I think about it my coding practices have changed so much since that decision (sitting in Claude / https://www.conductor.build) I should probably just go back to nvim!
It's not clear to me why you would want your editor to run in as many environments as possible unless you're a system administrator. Generally, most of us do our serious coding work on the major OS platforms, and we would want a native editor that maximally takes advantage of those platforms and the hardware they tend to run on; if we need to edit something on some other box elsewhere, we could either use Zed's remote development system or just use MicroEmacs, Nano, or vi, depending on which key bindings we're used to.
The advantage I find personally, at least compared to something like emacs, is not just that you get high-fidelity scrolling, but that the editor can open 60,000-line code files instantaneously, syntax highlight all of it using tree-sitter, and be butter smooth and responsive the entire time I'm searching through, making multi-cursor edits, or moving through the file. As well as being able to open, for instance, log files that are multiple megabytes large without having to worry about anything.
Plus, Zed has a lot of refinements and features over other editors, even if you discount the benefits of GPUI. I've spoken at length before about why I think its approach to coding agents is the best at sort of enhancing the human in the loop and keeping you in a flow state and preventing skill degradation[0], but I also think the range and design of the editing actions are better than almost all modern text editors, closer to what something like Emacs provides, and the UI is overall more streamlined and pleasant to use than something like VS Code, even though it's generally the same philosophy. There's also the collaboration features and the edit predictions.
> It's not clear to me why you would want your editor to run in as many environments as possible unless you're a system administrator?
All I was really trying to say is that one may find oneself in a more limited environment at some point. I was not so much thinking of remote editing, for the reason you mentioned: most developers and even sysadmins (unless restricted for security or some other reason) can just remote in, and most editors do this well these days. But in a situation where one is installing their system, or their graphics acceleration has broken for whatever reason, one is suddenly without their trusty editor. So although I hardly ever use emacs in a tty or pty, it's a fallback in case something goes wrong, so I can fix it while still using my editor.
> that the editor can open 60,000-line code files instantaneously, syntax highlight all of it using tree-sitter, and be butter smooth and responsive the entire time I'm searching through, making multi-cursor edits, or moving through the file.
This definitely sounds interesting. Emacs, when dealing with very large log files and such, is not always fantastic, and some features become painfully slow or completely unusable.
Your other points on the AI features are interesting. I have been using Aider and tried aidermacs, but ended up going back to a shell buffer with some basic commands to switch back to the buffer, and other features to control it while in one of the code buffers. So I will definitely look at some of the AI features when I give it a spin.
Try Sublime Text if you want lower RAM usage. My instance is currently sitting at ~120mb with 3 separate projects open (that does not include usage by Rust Analyzer which runs in a separate process (and tends to use GBs of RAM), but I suspect your numbers don't either)
> Last time I used zed for go development it spawned nodejs servers (downloaded without asking for permission!) for god knows what.
LSPs; they are snagging the LSPs made by other developers for the languages you are using. If you install any LSP or language support in VSCode, it's running the same thing. It only installs when you are using a language that has default support, such as Rust, Python (which I believe uses a Node.js LSP), Go (same as Python), etc.
Thanks. Have you used cursor or copilot (recently, tab completion has gotten better)? I'm curious how this compares in actual performance. Last time I used Zed, this was a showstopper as the completions were much worse (though if I configure it to use copilot as my source, I guess it should perform the same as VsCode?).
Personally, I don't like this autocomplete or tab-completion thing. I find it very distracting. I understand why someone might like it, but it's just not my thing.
I mostly use Claude (and Codex) through ACP in Zed. My colleagues use Cursor and VSCode, and I don't feel like I'm missing anything at all.
I came from nvim after using vim for decades. For me, you could approximate Zed with endless hours of tinkering in nvim, or I could just use Zed.
Things that keep me: fast. Easy project wide search that is fast. Easy file completion that is fast. Easy ability to add/remove line numbers from a gutter. Vi keys that... kinda mostly work. Sorta. Code collapsing that I didn't have to spend hours fidgeting with that also mostly works with Ruby (except for rescue clauses / end-of-function exception handling which collapses weirdly.)