I think I have understood for quite some time what it wants to do (though doubt always creeps in when I check the website, because it is so incomprehensible), and every year when I download the application again, it looks a bit cleaner, a bit easier to just use. But still, basic things always elude me. Do I really have to read the handbook to figure out how to format text in the knowledge base? Half the windows and symbols just make no sense, etc. Try pressing a button to see what it does, and now everything looks different and what even happened?
It seems to improve glacially on that front, and I know that to really use it, I have to learn to program it, but I am also of the mind that basic functionality should be self-explanatory. And Pharo itself, as the basis of this, seems so convoluted and complex that I wonder if I even want to get into it.
And then, the community still seems to be solely on Discord, and that is always the point where I bow out and wonder whether Cuis Smalltalk or other systems with simplicity as a core tenet are not much nicer to use and I should look there. Of course, in the end, I never get more than surface deep into Smalltalk, because while I want the tools to build my own environment, if I need to build them first, there is always more pressing work...
But honestly, a great knowledge base and data visualization I can intuitively just use and then expand later on with my own programs sounds like a dream workspace. It's just that it is really hard to get into at the moment. I don't know any Python, but I could just use Jupyter now and learn as I go; sadly, I never get that feeling here.
That would come later and take the air out of Smalltalk business adoption as IBM and others pivoted away from Smalltalk into Java.
It is no coincidence that while Java has a C++-like syntax, its runtime semantics, the way the JVM is designed and its related dynamism, Eclipse, key frameworks in the ecosystem, and by extension the CLR, all trace back to the Smalltalk environment.
https://cs.gmu.edu/~sean/stuff/java-objc.html
Java EE was born out of an Objective-C framework, Distributed Objects Everywhere, from the OpenStep collaboration between Sun and NeXT:
https://en.wikipedia.org/wiki/Distributed_Objects_Everywhere
Already there we have the Smalltalk lineage that heavily influenced the design of Objective-C in the first place.
Then there is how the Smalltalk => Self => Strongtalk lineage ended up becoming the HotSpot JIT compiler on the JVM.
Finally, reading the Smalltalk-80 implementation books from Xerox PARC also shows several touch points between the designs of both VMs.
The way classes are dynamically loaded, introspection mechanisms, code reloading capabilities, sending code across the network (RMI), jar files with metadata (aka bundles), dynamic instrumentation (JMX).
The Pharo folks insist on trying to adapt to industry, and that's also the focus of a lot of their published material, though there's still an academic legacy in there.
For me the tricky thing is to find enough time to study the APIs; the basics of the programming language are easy to learn, and then one has to figure out the flow of creating a package and adding code to it, but then it's a slog searching and reading the code for hundreds of classes to figure out where the thing one wants resides. On the other hand, when things break it's usually a breeze, the inspection and debugging is great, unless one makes the mistake of redefining some core class and corrupts the entire image.
In which case, one does not save that image ;-)
Or if one chose to save a broken image, one goes back to a previous image and reloads all the more recent logged changes up to, but excluding, the change that broke things.
https://cuis-smalltalk.github.io/TheCuisBook/The-Change-Log....
But it should be very rare, and besides the change log and similar facilities it's also easy to just make timestamped copies of the image and push packages to git.
So how would we explore fundamental changes that break the debugger? Would the dumbest workable thing be to create subclasses without changing the originals?
I have come across someone who genuinely seemed to think that making copies of just the image was a viable approach to version control. Their project failed badly; and they were absolutely convinced the problem had been Smalltalk, when the problem was not understanding how they could use Smalltalk.
There are a lot of things I like about Smalltalk, but the parent poster is right, Python is a more practical choice. Not really because it's better-known so much as because it's procedural. Smalltalk is so all-in on object-oriented programming that it puts me in the wrong mental space for just banging out throwaway code for getting a question answered quickly. Instead I'm constantly being pulled toward all this "clean architecture clean code" thinking because that's just such a big factor in my headspace for object-oriented programming. Even if I don't succumb to it, it's still a drain on my executive function.
And then yes, agreed, building on Pharo's UI system is a problem. That's frankly something that the Smalltalk community needs to get away from across the board. It's just too clunky by modern standards. And it would take a lot to convince me to agree to adopting a Pharo-based tool like this at work, out of fear that all the non-native UI stuff would become a major source of accessibility barriers. And I don't quite understand why the Pharo community seems to be so convinced that it's a necessary part of the image-based development paradigm, when lisp has been doing images without tight coupling to a single baked-in graphical IDE for decades.
I keep thinking maybe all it needs to be is something like an extension (or alternative) to the language server protocol for exposing some of the intermediate code analysis artifacts back to the developer. And then I can happily bang on that from a Jupyter notebook.
You might like this talk! "Liberating the Smalltalk lurking in C and Unix - Stephen Kell" https://news.ycombinator.com/item?id=33130701
otoh I can see what you mean.
otoh I can see someone start "banging out throwaway code" and testing it in less than 2 minutes.
Python looks procedural; however, since the new object model was introduced in 2.2 (with the original approach removed in 3.0), it is OOP to its bones, even when people don't realize it.
By contrast, Smalltalk is so deeply object-oriented that it doesn't technically even have an if statement, just instance methods on the Boolean classes.
The practical importance is that we use the same tools to search for implementers and senders of #ifTrue: as we use to search for implementers and senders of any other method. (We use the same pencil to sketch that we use to write.)
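For illustration, a minimal sketch in Pharo-flavoured Smalltalk (exact browsing selectors can vary slightly between dialects):

    "Conditionals are ordinary messages sent to Boolean objects;
    #ifTrue:ifFalse: is implemented as methods on True and False."
    | x |
    x := 3.
    x > 0
        ifTrue: [ Transcript show: 'positive'; cr ]
        ifFalse: [ Transcript show: 'not positive'; cr ].

    "Because of that, the same navigation tools apply to it as to any other method."
    SystemNavigation default allImplementorsOf: #ifTrue:ifFalse:.
    SystemNavigation default allSendersOf: #ifTrue:ifFalse:.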
In part because it's much easier to boot a fresh image and start hacking than to run some python3 -m venv incantation that sometimes breaks or breaks something else. There's a lack of libraries though, and nowadays it might be easy to just point the image to a remote git repo to import one, but I'm not sure; if it isn't, other languages have it easier. At least when you can just copy the algorithm into a file, put the right formula at the top, and start using it.
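(If I remember right, pointing the image at a git repo is roughly a Metacello incantation like the one below, assuming the project defines a baseline and keeps its code under src/ -- the names are placeholders, and I haven't checked this recently:)

    "Load a project straight from GitHub into the running image."
    Metacello new
        baseline: 'MyProject';
        repository: 'github://someuser/MyProject:main/src';
        load.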
As for:
> Yet none of remarkable applications built with it except the tool itself.
The same is true of Smalltalk in general, and of Lisp, and some other technologies. Lack of wide adoption and of a large number of success stories is not, alone, proof that the idea/technology is fundamentally bad. The choices in our industry are strongly path-dependent, driven primarily by popularity contests and first-mover advantage. This dynamic is famously known as "Worse is Better"[5].
What the original essays didn't account for, however, is that whatever gets moderately successful today, becomes a building block for more software tomorrow. As we stack up layers of software, the lower layers become almost completely frozen in place (changing them risks breaking too many things). "Worse is Better" sounds fine on the surface, but when you start stacking layers of "worse" on top of each other, you get the sorry state of modern software :).
So yeah, those ideas may not fit the industry today, but it's worth keeping them in mind as a reference, and try to move towards them, however slowly, to try and gradually improve how software is made.
---
[0] - I write about that regularly; look up my comments with the phrase plaintext "single source of truth"[1] for some entry points, like [2] or [3].
TL;DR: use of such "contextual tools" should become the way we build software. We need to have environments that come packed with various interactive "lenses" through which you can view and edit the common codebase using whatever representation and modality (textual, graphical, or something else) is most convenient for the thing you're doing this minute, switching and combining them as needed[4]. What we consider a codebase now - plaintext files we collaborate on directly - needs to evolve towards being a serialization format, and we shouldn't need to look at it directly, not any more often than today we look at binary object files compilers spit out.
[1] - https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
[2] - https://news.ycombinator.com/item?id=42778917
[3] - https://news.ycombinator.com/item?id=39428928
[4] - And saving the combinations and creating new tools easily, too. Perspectives you use to understand and shape your project are as much a part of it as the deployable result itself.
> in fact, I think this isn't going far enough
I would be more than curious to learn more about how you see this space :)
I am not sure what you mean by "people with poor human qualities". I have been particularly involved in GT, and less in Pharo, for many years. GT is based on Pharo, but it comes with its own environment and philosophy to support a goal that is not about Smalltalk. We do encourage you to join and see our community, especially if you are interested in learning a different kind of programming with us. I am yet to find people with poor human qualities there.
As for funding, we sustain GT through the work we do at feenk where we solve hard problems in organizations that depend on software systems.
And we do not claim that GT is the best. Only that it's the first to show a different possibility for how systems can be made explainable.
What due diligence are you talking about? Personal research? (Genuine curiosity)
Here it's all one system, and thinking of the image as a key-value store feels quite natural too. Finally, the UI with panes that go right also feels natural and looks quite slick. I wonder if it's easy to switch between languages? Like can the key-value store pass data to a python program, or use an Apache arrow table?
A few notes: moving from left to right allows for a dynamic exploration, which is different from the typical defined exploration of a notebook. In Glamorous Toolkit we consider that both are important and complementary.
The dynamic exploration is enabled by the tools following the context. For example, the views in the inspector appear when you get to an object that has those views. You do not call these views by name. Also, choosing a different view allows you to change the course of the exploration midstream. Furthermore, you can create a view right in place, too.
The exploration possibilities are visible, but there are more pieces that are less visible that make the environment interesting. For example, there is a whole language workbench underneath and a highly flexible editor that can also be contextualized programmatically.
If you do give it a try, please let us know what you think of it.
You do not have to read the handbook to format the text. You can use Markdown in a text snippet :). This gives you a compressed overview: https://book.gtoolkit.com/understanding-lepiter-in-7--6n7q1o...
> I know to really use it, I have to learn to program it, but I am also of the mind basic functionality should be self explanatory. And pharo itself as the basis of this seems so convoluted and complex...
We use Pharo as a programming language for building the system, and most extensions are expected to be written in it. It's possible to connect to other runtimes, like Python or JS, and extend the object inspector that works with remote objects using those languages. But overall, learning Pharo is a bit of a prerequisite. I certainly understand that it can appear foreign, but convoluted and complex are not attributes I would associate with it :).
Now, in GT, the environment is built anew from the ground up, and it's different from the classic interfaces found in Pharo or Cuis. And of course, it's different from a typical development environment, too, because we wanted to build a different kind of interface in which visualization is a first-class entity.
Our community is indeed on Discord a lot, but we also host discussions on our GitHub repository: https://github.com/feenkcom/gtoolkit/discussions
In any case, I am happy you find the need for "a great knowledge base and data visualization" relevant and useful.
The problem we address is how to understand a situation about a software system without relying on reading code.
Reading code is the most expensive activity today in software engineering. We read code because we want to understand enough to decide how to change the system. The activity is actually decision making. Reading is just the way we extract information from the system. Where there is a way, there can be others.
In our case, we say that for every question we have about the system it's possible to create tools that give you the answer. Perhaps a question is: why not use standard tools? As systems are highly contextual, we cannot predict the specific problems, so standard tools will always require extra manual work. The alternative is to build the tools during development, after you know the problem. For this to be practical, the creation of the tools must be inexpensive, and this is what GT shows to be possible.
This might sound theoretical, but it turns out that we can apply it to various kinds of problems: browsing and making sense of an API that has no documentation, finding dependencies in code, applying transformations over a larger code base, exploring observability logs and so on.
Does this help in any way?
The interesting bit about a test is that it's inexpensive to create and you can create it within the context of your system after you know the problem. You do not download it from the web. You create it in context and then, when even a single test fails you might stop and fix that one. Why? Because it reveals something you consider valuable.
Now, tests answer functional questions, but the same idea can be applied to any other kind of analysis. The key is to have them created within the system. If you download an analysis from the web, it will be solving some problem, just not yours, so it will not look interesting.
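For illustration, such a contextual test could look like the SUnit sketch below; the Invoice class and the discount rule are invented for the example:

    "A test created inside the system, after the problem is known.
    It encodes a rule that this particular system cares about."
    TestCase subclass: #InvoiceTotalTest
        instanceVariableNames: ''
        classVariableNames: ''
        package: 'MyApp-Tests'

    InvoiceTotalTest >> testDiscountIsAppliedOnlyOnce
        | invoice |
        invoice := Invoice new.
        invoice addLine: 100.
        invoice applyDiscount: 0.1.
        invoice applyDiscount: 0.1.
        "Our rule: discounts never stack."
        self assert: invoice total equals: 90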
I have been thinking about my own experience trying to learn Pharo and GT and came to the conclusion that, because of the nature of Smalltalk, written teaching materials are not effective and in fact are even painful to learn from. There's nothing wrong with the Smalltalk approach to computing, such as the GUI-centric, image-based environment; that is what makes it so interesting and such an immersive development environment. But video tutorials and live-session hand-holding are what's needed to teach these environments, because of the highly interactive nature of Smalltalk. The Pharo MOOC exists, but that requires the kind of academic-level time and mental commitment of back when I was in school. And as a hobbyist, I have less-demanding options for learning that are also interesting, so I end up pausing my efforts to learn Pharo/GT.
It's a tough situation for Smalltalk proponents because interactive instruction material is very costly to produce and maintain. And the Smalltalk communities are much smaller and don't have massive corporate sponsors. Even cheaply-made YouTube videos take time and effort, and I am grateful for those who make them out of their enthusiasm for the technology! But I'm afraid I've been conditioned to watch slick, engaging video content with clear, well-paced voice tracks and accurate captioning.
I do wonder if the Smalltalk community could benefit from a beginner-friendly, simplified version of the Pharo UI that starts up in a Jupyter-notebook-like interface and exposes only limited tooling, to give the learner a taste of what's possible with some guardrails to prevent them from getting lost. Gradually revealing the Pharo/GT features that way would keep the learner engaged and motivated. Because of the above-mentioned challenges with producing teaching content, self-guided interactive learning tools would be the best bang for the buck, I think. I thought the Elixir language manual was excellent, and it was the first language reference doc I actually enjoyed reading! (Until it got to the string handling... then I ran out of attention span, lol) Elixir also has Livebook.dev, which gives a notebook interface. Could be a good inspiration.
Another possibly dumb idea I had was that maybe Smalltalk is an ideal companion to current LLM tool/function-calling APIs, where an LLM can "guide" a live Smalltalk environment for developing an application through an API. Since a Smalltalk environment is always running, it can also (maybe) feed relevant live state context back to the LLM with some feedback prompts... I suppose a Smalltalk environment could serve both as a sort of memory for the LLM and as an agent for modifying the Smalltalk environment?
Sorry, didn't mean for this to sound like "you must do this for free for my mild interest in your passion project!" This has been more of a stream-of-consciousness spillage onto this forum because Grumbledour's excellent comment resonated with me. :) And the mention of notebook interface clicked in my head.
Anyway, sorry for ranting, and thank you GT/Pharo team for making something fascinating! Stuff like this is what keeps me in the technology field instead of totally leaving it out of frustration with where tech meets business!
It explains it right on the site: "To learn how to program it, first learn how to learn inside the environment." /s
Judging from the comments, and from such interesting projects languishing in obscurity, Smalltalk/Pharo[2] still has a PR problem, even though I think a lot of people are kind of fascinated by the ideas of image-based persistence[3]. The typical easy comparisons to VMs, IDEs, and notebooks all seem to fail to capture an important part of the essence. Hence the need for new vocabulary like "moldable development" and "contextual micro tools", which is all part of the appeal and part of the problem. It really is a different kind of thing.
I (still) hope it all catches on a bit more, but my sense is that it probably needs to present itself as a bit less academic. Compare Moose's[1] touting of "meta-meta-modeling" with something like gritql[4], which focuses more on use cases and seems to be gaining in popularity rather than remaining obscure. Seems like maybe it's time for a change in tactics to bring in a wider audience.
[1] https://en.wikipedia.org/wiki/Moose_(analysis) [2] https://en.wikipedia.org/wiki/Pharo [3] https://en.wikipedia.org/wiki/Smalltalk#Image-based_persiste... [4] https://github.com/getgrit/gritql
Moldable Development is different from programming in Smalltalk. It's not a way of advertising Smalltalk, it's a new way of programming :)
Moose indeed came from academia and was focused on various kinds of analyses. Glamorous Toolkit does include a small part of it, but it is a whole environment.
Thank you for the suggestion related to use cases. Today, people tend to look for specific tools that can be used out of the box for a specific set of problems. Something like gritql fits that expectation well.
The challenge we face is that Moldable Development is generic and applicable in a wide range of scenarios. For example, the video on the front page tries to provide an idea of classes of use cases. The GT book offers many more such case studies. All of these are accommodated uniformly in the same environment with little energy. This is novel and requires awareness, which leads to a catch-22 problem.
We try to address this also coming from the larger problem at https://moldabledevelopment.com, including a new book I am writing with Simon Wardley.
We would be interested in suggestions for how to communicate it better.
It's just so incredibly hard to remember that "beginner's mind"; you really have to see it in action to understand just how much is in your brain that you don't even realize.
That exercise will probably help your book more than anything.
I very much agree that it is hard to remember the beginner's mind. I also find that a larger challenge is that there isn't a single beginner's mind, but many. When the space is large and there are many paths, people can come from different directions. Finding commonalities seems to be significantly more difficult in that situation.
But this still leaves the problem of how to explain it before people work with the environment. The conversations in posts like these are very helpful, for example. I'd be interested in other suggestions that could help with that.
For an example, this guest was particularly clever and charming:
https://www.youtube.com/watch?v=FJP2zkl_44o
J/k. It's me. I'm the guest. :)
BUT: this was our talk after he saw my livestream where I was stumbling around trying to do things in GT early on, so it might help other people.
The ability to fix things on the fly and have the whole system as your ide/debugger is nice, but that's not very practical for the end user.
The primary goal of Glamorous Toolkit is not to be a technology for expressing systems in. Its purpose is to be an environment with which to work with systems written in various technologies.
The differentiator to other environments is that here you can create contextual experiences (made out of micro tools) while you develop.
That said, of course, the first language targeted is the main language in which the environment is built: Smalltalk. And you can build interesting systems with Smalltalk today.
The end user does not have to see the development tools. You can create web applications just fine.
Collaboration is handled through Git (all sources of GT are on GitHub).
You mention TDD. Moldable Development is complementary and can be quite interesting to explore :). And more importantly, you can use GT to employ Moldable Development in your system development.
The way Java IDEs work on their virtual project filesystem mimics a Smalltalk image for example.
SO: How do you give this knowledge to new folks? Somewhere on the site I saw a treemap (a large rectangle, divided into hundreds of smaller rectangles, in various colors, grouped into larger rectangles...) which was supposed to be... helpful? Prideful (look at what all we've built!)? What I get out of it is a scary nightmare - you'll never find what you want here! There's just too much!
And watching one of the demo videos, where the person is showing how they analyzed and visualized some React code, and dependencies, etc., it was always, "Oh, we want to see it this other way, shazam! we have an awesomely excellent tool that just makes that pop right out!" But only if you know the arcane sequence of calls/invisible GUI actions to get to it.
I used the (slightly ridiculous) example of the Lisp function above, to illustrate something. I REALLY REALLY REALLY wish no one had EVER used the phrase, "Intuitive!" about their tool or user interface. There is nothing intuitive about your UI. But! But! But! The Mac! 1984! I had a Mac in 1984, I'm typing on one now (after a decade of wandering in that other wilderness :-) and I'm here to tell you the Mac UI used to be (and still largely is) discoverable.
That's the magic sauce.
It's great that you know the 12,011 functions and tools inside GT. I'm sure you'll come up with some nice trip that takes a user through some tiny percentage of those, and then dumps them out at the end in the middle of a rather large city, where they're now supposed to know where every building is, and what's in those buildings. And speaking from experience, it's wildly frustrating to have a few tools at hand, none of which really help much in some new problem, and no freaking clue how to find relevant stuff, and just wind up grinding through a lot of code in a language you're not really fluent in, and generally giving up on the whole mess and going back to the tools you know.
For one small example: someone wants to do something with a spreadsheet file someone gave them, and can't find anything about "spreadsheet" in the docs... because they don't know to save the spreadsheet in CSV format, and there are tools that work with that.
As someone else mentioned, perhaps having an LLM set up with all the stuff to be a guide might help.
Lisp and Smalltalk aren't written languages, they're oral traditions (yeah yeah yeah, Little Lisper, SICP, but those are just starters, and not for the Lisp dialect you're using anyway). Videos may be a start, but... man, I don't envy your task.
Good luck!
I am not sure where to start. So, let me start with what we do not claim: intuitive. In fact, I'd be the first to say that our proposition is counter intuitive. Intuition is based on what one is used to and we propose a rather different approach.
People expect easy to use clicking tools. We propose an environment with which you should build the tools you click on. When you refer to the 12,011 functions and tools inside GT, you assume one has to learn them by heart. We suggest that if you learn how to navigate the environment you can bootstrap your way to discovering what exists. Indeed, inside the environment there are literally thousands of tools. The treemap visualization you mention was not for bragging, but a validation: we created thousands of tools in the process of developing the environment itself inside the environment. We built those for engineering purposes and they show that the idea of Moldable Development of programming through thousands of contextual tools per system actually works.
I understand that it's not trivial, and that it can be downright frustrating. But we have seen it working with people that wanted to learn. So, we know it works, but we still have to find a way to communicate it. We still seem to have some way to go in finding that explanation. Until then, we created an environment with extensive examples and a technical book inside. Now we are writing a less technical book about the overall problems it addresses and exploring ways to explain it through videos.
There are basically about 5 different kinds of things you can build: tools/apps, libraries, frameworks, services, languages. I'll hazard a guess that most engineers are used to slicing stuff up this way and that much confusion results from arranging docs in terms of case-studies, which is maybe more appealing to management and/or academics? From the perspective of engineers.. "use cases" are expected to be presented as a small bulleted list or similar. Paragraphs are a losing proposition here, because if they want more, then they are ready for a really concrete and runnable tutorial, and almost anything in between will annoy them.
In terms of docs, it's very tempting for inventors, but almost always a mistake, to add "paradigm" to the main 5, even if it's true. Academics want that up front, but for industry, where results beat methodology, paradigms are a "show but don't tell" thing that you have to lead people into very slowly, if you even need to address it directly at all. You can break that rule at your peril and capture interest from a few hardcore alpha nerds.. but even then it is best in the context of a well-designed and otherwise practical tutorial, in an optional "post-mortem" section dedicated to recap/discussion. Don't mix it with the main tutorial, and always be as ruthless as possible about separating philosophy and pragma. (I see you're on the way, and there's already a separate domain.)
Building stuff that's in several of the 5 categories at once is often the most productive work and by far the most fun, but also the hardest to present. You can't show more than 1 aspect of the big 5 at once, or you risk losing people. And you always have to be really clear about which one you're presenting for consideration. If you can lay things out like that and then cross-reference docs between the tool/language/framework aspects, it enriches the content and makes it feel comprehensive and complete. More importantly though: it gives people that are in the wrong spot some ability to course correct and that reduces frustration that would otherwise make them complain/turn away.
Books and videos are probably for people who are already true believers. Unfortunately, getting more people into that category faster actually means making them say "wow!" in ~15 seconds. That's enough time for a 1-liner, a 15-line program, or a gif. Sad but true. The book mentions Python, d3, tons of popular stuff that is a good hook.. but most people probably don't click in, and if they do, the level of detail/runnability doesn't feel quite right. I suggest polishing like 3 of those and lifting them to the main page, maybe also involving Docker to make a quicker quick-start.
HTH. Explainability and debugging generally are both about to get a LOT more important for obvious reasons the way things are going. And documenting cool stuff is way harder than building it sometimes ;) I'm thinking a lot about how to do these things myself so I'd really welcome drive-by feedback from anyone else if my assumptions about "average impatient engineer" are way off base here.
Would you by any chance be interested in getting a tour from me? At the very least you'd get to see what it is, and perhaps in the process you might find ways to explain it.
If you are up for it, please connect with me in some way (the GT Discord, @girba on LinkedIn, @girba on X, @tudorgirba.com on Bluesky) and we'll take it from there.
(In case you wonder, the answers I saw are: (1) semantics-aware code search/replace with no IDE requirement, or maybe better, "grep" with no false positives; (2) write queries on the command line or in a file, and pass them to the tool; (3) very low: install the binary, there is no need to "import" or "set up a project" or anything like that.)
Compared to that intro, Glamorous Toolkit presents itself much worse. Here were my thoughts when I opened the website: It seems to be some sort of data explorer tool, but it's also kinda weird. For example, the API explorer clearly shows post-processed data, so it's not actually exploring the GitHub API, but rather some sort of binding to it (GhRepo according to the title).. so what's the use of an API explorer which requires bindings to be written first? The DevOps explorer seems interesting, but I don't care about Jenkins, so what I really want to see is how hard it is to teach it about a new system. Maybe it's in the videos, but I am not going to watch long videos unless I am already interested in the tool. Maybe if I click around? Nope, and the blog is not very helpful either... In fact, the comments on the post were much more informative than the website. I love HN!
So, apparently the answers are (1) it can visualize the data, if I am willing to learn smalltalk (2) it's smalltalk, so you create smalltalk classes, and they become stuck in the "image" that you cannot easily share with others nor use with any existing workflows and (3) it's probably a few hours of youtube (the first video alone is 45 minutes) + experimenting before I can get any useful output.
I don't see it ever catching on at all, sorry.
Granted, it seems like a general problem with Smalltalk: the collaboration story is bad. It seems every Smalltalk user lives in their own little world, and sharing stuff with other people is an afterthought, at best. Just compare gritql's and GT's homepage: one starts with 3 copy-pasteable commands which would immediately show something cool, another starts with mysterious "Download" button followed by 45 minute video.
Because it's really not any one thing other than an environment that is built from the ground up for building highly explainable systems, including itself. Think about it as a "meta-tool", or a tool for building tools. Similar to how an operating system is a piece of software that makes writing other pieces of software easier.
So naturally this type of workflow lends itself to data analysis. However it's no less applicable in building p2p networks, or working with codebases in other languages.
Regarding sharing code, it's actually really straightforward. Your classes aren't stuck in images; they are normally stored as plain text files and committed to git. The library story is arguably better than in most other languages, because of how flexible Smalltalk is.
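For example, a class exported in the Tonel format that Pharo/GT use is just a readable text file, one file per class, which diffs nicely in git; roughly (the class here is made up):

    "MyApp-Core/PeerNode.class.st"
    Class {
        #name : #PeerNode,
        #superclass : #Object,
        #instVars : [ 'address' ],
        #category : #'MyApp-Core'
    }

    { #category : #accessing }
    PeerNode >> address [
        ^ address
    ]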
A question: Why would you say that the Download button is mysterious? I think I might have missed the issue.
Still, we have now changed the "Download" button to "Get started", which links to a page with more concrete steps. Is this improving the situation from your point of view?
The worry about the enhancements being trapped is interesting. All of GT sources are in Git and extensions can be packaged next to your own project. We will try to address it explicitly in the main page. Would you say that a Frequently Asked Questions section would help with this?
But the more interesting bit comes from how you can extend the environment to make it fit the context of your problem. For example, a prominent way to extend is by creating contextual inspector views for different objects.
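For illustration, a minimal sketch of such a view: it is just a method with a <gtView> pragma (the Order class and its items are invented for the example; details may differ):

    "Adds an 'Items' tab to the inspector for any Order instance."
    Order >> gtItemsFor: aView
        <gtView>
        ^ aView columnedList
            title: 'Items';
            priority: 5;
            items: [ self items ];
            column: 'Name' text: [ :each | each name ];
            column: 'Price' text: [ :each | each price ];
            yourself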
When taken as a whole, we end up with a new kind of development experience that is centered around contextualizing the environment as part of development. It turns out that this is applicable to a wide range of problems.
For example, the talk video from the page shows examples from several domains:
- it starts by browsing the file system of a React project and then querying the external dependencies from a package.json
- then it goes into static analysis of React components and their dependencies
- then it shows how to explore the data of a REST API and document its usage
- then it goes to work with GraphQL and shows how to combine it with imperative code to explore data; and even here, we go a step deeper and answer the question of how the tool worked (i.e., what query was sent to the server when we do not specify pagination completely)
- then it shows how we can work with Python by extending the inspector with Python code
- then it shows how we can change the editing experience as well and make it contextual
- then it shows how we can document a domain model through executable examples that, when combined with contextual views, become documentation
- then it shows how to work with the knowledge base and even post live to social media from it through a dedicated snippet
- then it shows how we can explain a Docker setup and how the commands were derived from templates
- then it shows again social media interactions, but this time by browsing posts in inspectors and querying the feed live
- and finally it shows how we can have a dedicated editor for configurations defined in JSON that knows how to highlight, complete and navigate based on the schema information
Now, these are not features; they are just some of the things you can use the environment for without needing to switch. The book inside the environment shows even more such examples for inspiration. Each of these might look similar to some tool somewhere, but the possibility of having all of them in the same environment, made out of the same pieces combined in many ways, is the differentiator.
Does this help in any way?
My main reservation, at least initially, was that the website comes across as too focused on the paradigm for its own sake. Hence my comment - it feels like a PhD project trying to project complexity and impressiveness. Personally I would have appreciated more focus on what it is and why it's useful, not an abstract framing of "moldable development". I hope that's not too harsh - just wanted to give my 2c honestly.
The problem is that for any given problem, there might be some solution out there that does something like it. Except that there is no solution anywhere in which you can do all of these. So, we are showing Moldable Development because that is the goal and that is the proposition of the technology. I believe that masking it as something else would not be particularly useful.
We understand that at this point in time, most people do not look for Moldable Development. Most people will consider it once others have taken it and created large value with it. So, right now, we are interested in finding those people, because they will convince everyone else :).
If you want to learn more about Moldable Development, take a look at the book I am writing in the open with Simon Wardley: https://medium.com/feenk/rewilding-software-engineering-900c...
The goal was to explore the idea of contextualizing our tools for each development context. It is built in Smalltalk (and Rust underneath) because Smalltalk already has the possibility of a live environment that can be changed from within. This allowed us to explore the space at a much lower cost. Glamorous Toolkit is the result of that.
Thank you for the kind clarification. Indeed, it describes my intent quite well. Thank you for taking the time to describe how it is perceived from your point of view. I hope this is useful for the initial recipient as well.
We chose Smalltalk because we could not have discovered the space (what we now call Moldable Development) otherwise within the constraints we had.
p.s. I learnt in the meantime that it's possible to downvote somehow (I still do not know how). But in this case, I actually upvoted you.
For those with more experience, is it still relevant? Can the same be accomplished with python and jupyter notebooks?
The idea of Glamorous Toolkit is that it’s a collection of tools you use to solve software problems by making little explanatory tools. You might start out with some bigger problem like “I need to make this service fast”, come up with a question like “what does the flow of data I care about look like through the service?” and then try to answer that question by making tools that could analyze/visualize logging output or a stack trace or whatever makes sense in your software’s context.
The technique of “making little tools that explain, to help answer a question” is Moldable Development, similar to how Test Driven Development is “make a big failing feature test, then loop: write little tests and make them pass until the big one passes”.
You can make little tools to explain away questions you have while you’re working with plugins or shell scripts or whatever you’re comfortable with and that’s “Moldable Development”. The Glamorous Toolkit just happens to be a nice system of tools that make it easy to make more little tools to help explain away problems.
Hope that helps! Lmk if you want to see some examples.
Source and bias: I worked closely with the developers before I had to take a break from work and computer stuff.
So I'm a believer in the principles. But I'm also curious about throwaway743950's question. What are the things in the Glamorous Toolkit that concretely make it better for this style of programming than traditional tools? You say "[it] just happens to be a nice system of tools that make it easy to make more little tools", but that's got to be downplaying it. Switching environments is an agonizingly costly thing to do. What rewards await those who make the jump? Rubies? Emeralds? Custom views-with-nested-sub-custom-views? Curious (but not yet won over) readers want to know.
This means your tools and visualizations are just a context-specific view of your objects. Meaning you aren't limited in how these tools can interact with said objects, because you are never working with static data; it's always the actual objects.
It's hard to put into words, but it's similar to the difference between println debugging and a lisp repl or smalltalk debugger. They technically do the same thing but the actual implementation of them makes a world of difference.
Because if it wasn't for the fact the graphical stack was implemented as smalltalk objects, you couldn't build tools like the driller or debugger since they would have to be implemented as a secondary piece of software that loses the original context.
For example, I built a custom tool for myself when I was working on this p2p network and had a section of the codebase with some non-obvious control flow, since it was handling multiple different p2p networks at the same time. Normally this is where you include a diagram in the docs, but in about an hour I built a custom code editor for the class that visualized all the control flow and explained the cases in a flow diagram by simply introspecting on the methods defined in the class. And this tool never fell out of sync like a static diagram, since it wasn't hardcoded by me. And from that point on, I worked within this tool whenever handling anything related to this.
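(Not my actual code, but roughly the shape of such a view, using plain reflection; the class name is made up and details will differ:)

    "A contextual view listing each method of the class together with the
    messages it sends -- a crude 'control flow' table that never goes stale."
    PeerDispatcher class >> gtDispatchMapFor: aView
        <gtView>
        ^ aView columnedList
            title: 'Dispatch map';
            items: [ self methods asSortedCollection: [ :a :b | a selector < b selector ] ];
            column: 'Method' text: [ :m | m selector ];
            column: 'Sends' text: [ :m | ', ' join: m messages asSortedCollection ];
            yourself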
And fwiw, the python story is pretty seamless from my usage of it a few months ago. I was able to integrate and use python libraries into this project without much hassle.
Also, GT is now also a distributed Smalltalk system, too. We use it in productive settings to compute large jobs on large data sets :)
It feels so open ended that I wouldn’t know where to start. And I’ve actually spent several hours exploring Glamorous Toolkit!
There are quite a number of videos and explanations now, but we are still struggling to package them in a way that seems more approachable.
We would need help with this. If you are interested, I would offer to have a session with you that we record and in which we go through your questions and I provide live explanations. Join us on Discord and we'll take it from there: https://discord.gg/FTJr9gP
Or look at: https://book.gtoolkit.com/working-with-the-postgresql-relati... (in the environment you can load the code which comes with live documentation)
Normal documentation pages on a website would be a good place to start. Don't bury them in a tool I have to download and fumble through
The documentation is available online at book.gtoolkit.com (linked from the menu of gtoolkit.com). Would you see ways to improve that visibility?
When consumed in the environment, that book contains live snippets that can be explored.
We already have industry standards for doing this. Why would I want to build some micro-tool/throw-away code to do what another tool does much better and battle tested?
2. All the tools work together
3. All the tools are tracked in a central repository
I can achieve the same with unix philosophy, using the tools and languages I already know.
Indeed, it is possible to build tools elsewhere. The question is: do you build them, and if yes, when?
What we show with GT is that it is possible to build such tools for every development problem. This leads to thousands of micro tools per system that should co-exist.
GT is not a large tool to rule them all. In the Unix analogy, it is Unix, not one of the tools :).
This still leaves the question of why we would want to build those tools when there are standard tools already? Because systems are highly contextual. This means we can predict classes of problems but not specific ones, which then means that any clicking tool built before the problem is known will not address the specificity of that problem.
This is actually not that new of an idea. Testing is already done like that. We do not download tests from the web and run them on our system. We develop them as part of development after we know the problem. It's that contextualization that makes us stop every time a single test fails as we know that each of them captures something that our system specifically cares about.
Now, a test is a tool. We can extend the same idea to any other tool.
Does this address the question?
Being married to a specific tool like GT is limiting. GT doesn't work with most industry languages _today_, even though _in theory_ it could. It's written and scripted in a language few use, which makes it unapproachable
More seriously, thank you for sparring with me.
GT is free and open-source. It's extensive. It comes with documentation, too. We even document the practices and the process, too. With public case studies. With peer-reviewed publications. And we even bet our own livelihood that it works for tackling hard problems in significant systems that others cannot tackle.
So, yes, we are not just claiming that the problem exists. We have seen it validated first-hand over a large period of time (15+ years) so we are reporting on it :).
This experience points to the idea that decreasing the cost of creating a tool is much more important than the tools that exist out of the box.
Regarding the support for other languages, it's true that we only have analysis support for a couple of dozen languages. But creating the support for a new one is often measured in days. For example, it took a couple of weeks to add COBOL to the set. I challenge you to find even one properly working open-source COBOL parser (we looked and could not really find one). In GT you can find a whole free and open-source infrastructure :).
GT is certainly not a panacea. It's a documentation of how the approach can work. I am not aware of any other environment in which tools can be built in minutes and in which thousands of them practically co-exist. If this appeals to people, and it does appeal to some, now they have a vehicle to practice with. And for those that choose not to do that, that's OK as well :).
4. New contextual tools can be built inexpensively :)
I think the idea is good, but it's a tough sell for working programmers because the whole culture of it is so foreign. I think there's a version of GT that would do well if it described itself in terms of paradigms working coders know (POSIX, IDEs, blub-y languages, text files) instead of in terms of Smalltalk. Maybe something nearly as cool could be done as a VSCode plugin? I personally think of it as a kind of supercharged Emacs for Smalltalkers.
The first goal was to help us explore how far the idea of contextual tools can go. It helped discover what today we call Moldable Development. It is also the first extensive case study of Moldable Development, itself offering 5K+ contextual tools that we used to develop the environment itself. And when we work on a system, we build thousands more.
That said, now that we know what Moldable Development is, it can be copied. We want people to copy it. Our worry though is that we want people to copy everything, not only the visible parts.
For example, I understand the Emacs parallel. But think of this: while Emacs can be extended, how many extensions do you actually use that are specific to your system? We literally use thousands. Per system. That quantitative difference leads to a qualitative difference and it's made possible because of the totality of the environment.
So are there plans to copy GT/moldable development somewhere outside of Pharo/Smalltalk?
In the meantime, consider GT an extensive blueprint of what's possible. If there is one thing we learnt, it is that the technology is the smallest investment. The real investment is in learning how to exploit the idea of contextual tools for solving hard problems. That's what takes the longest, but the difference to how those problems are approached today can be measured in orders of magnitude.
https://news.ycombinator.com/item?id=33267518
https://news.ycombinator.com/item?id=23569799
https://news.ycombinator.com/item?id=42987951
https://news.ycombinator.com/item?id=23235120
It's something I've been considering for my current project:
https://github.com/WillAdams/gcodepreview
as I reach the limits of Literate Programming and so forth, but not convinced that the added overhead will pay off. Does anyone have a before-and-after of a project where this has been really useful? Bonus points if in Python.
I'd be interested to learn more about the kinds of things you'd expect to get from using Glamorous Toolkit for gcodepreview.
The way I see it, one path would be to make the rendering happen directly in GT (perhaps through an external texture). This would then allow you to see the rendering as views in various Python objects.
I actually need a data visualization tool at my day job, so I've installed it there, and begun working through the documentation/tutorials --- hopefully it will facilitate my wrapping my mind around a better solution for a recursive description of systems at work _and_ making a graphical interactive version available to co-workers --- if that works out, I'll try it out on my personal project.
I would be happy to learn more about the specific need you see in your work context. Either here or on our Discord.
If you are interested to learn more about kinds of problems and how it fits in the development environment, perhaps take a look at the book I am writing in the open with Simon Wardley (we recently released the first four chapters): https://medium.com/feenk/rewilding-software-engineering-900c...
Another recent screed on programming in this vein is John Ousterhout's:
https://www.goodreads.com/book/show/39996759-a-philosophy-of...
https://news.ycombinator.com/item?id=31248641
which, similarly, walks up to but doesn't quite come out in favour of Literate Programming.
EDIT: That said, it is framed as LP in at least one description: https://futureofcoding.org/catalog/smalltalk.html
Moldable Development is different from Literate Programming in at least two ways:
- Literate Programming is focused on a single narrative. In contrast, a system can be seen in many ways.
- Literate Programming has a fixed environment, and mixes text and code. In contrast, Moldable Development emphasizes the importance of the tool as an essential artifact through which to explain the system.
While Literate Programming does talk about explaining systems, I find it too limiting. Instead, we start from the observation that figuring systems out to know what to do next is the single most expensive activity in software engineering; thus it's more interesting to regard software engineering as a decision-making activity about an ever-changing system. This stands in contrast to the pervasive idea of software engineering as a construction activity. Optimizing for decision making leads to a new set of practices that are not an increment over those typically used today.
Glamorous Toolkit looks odd exactly because we optimized for something else that others disregard.
I guess what I want is something like:
- LyX --- where there is a focus on structure and tagging, and an ability to output a correctly typeset document
- pyspread --- where it is possible to work with data and calculations and arrive at results, including graphical ones (my current issue is that DXF and SVG use different coordinate systems --- DXF is simpler, and uses the same one as G-code, so that's what I use)
where it is possible to embed the spreadsheet cells/sheets dynamically, as one used to use OLE to put a spreadsheet in a Word document.
But ask yourself: how many things have been expressed this way? For example, what about security? Business domain documentation? Performance? Architecture? Discovering APIs? etc
It's not that the idea is not useful. It's that it's incomplete. We should not get stuck on it.
Besides that.. increasingly devs themselves are very commercial and not exactly in it for the love of the game. They are actively hostile towards stuff that isn't pushed on them by business, and not very interested in creative activity that pushes the bounds of the possible. I think you can see some of this in the insistence on "it's just a notebook" comparisons here, but before that.. docker was also "just another VM" to most until it was absolutely too big to ignore. It's more than comparing to what you know, it's almost actively refusing to be curious / interested. So maybe it's burnout from unnecessary churn in tech, or maybe people just resist entertaining the idea that interesting new ideas are even possible until it's pretty directly affecting their ability to be hired. Maybe both.
I enjoy your comparison with Docker. Indeed, the comparison to what you know is inevitable, and it works quite well for incremental news. It works less well for the genuinely new. But it's still on us, the authors, to try to find ways to communicate differently to appeal to a larger audience, especially as our goal is to educate. I am also of the opinion that the most interesting path is to get someone to create outsized value that cannot be ignored. Our current focus is to find those initial someones :).
Our latest attempt to explain why Glamorous Toolkit exists comes in a book I am writing with Simon Wardley in the open about Moldable Development. See it here: https://medium.com/feenk/rewilding-software-engineering-900c...
If you happen to have the time to look at it, I would be very much interested in feedback. In particular, does the environment make more sense in the context of the problem described in the book?
If you had an implementation of ggplot style plotting native to Glamorous Toolkit that would definitely catch my attention, though. Nowadays I typically tie together Python, R, and other languages with Emacs being my sort of control center.
I get the impression Glamorous Toolkit could step in here, but it seems like such a lift to reach feature parity with a set of tools, and GT doesn't seem to integrate as well with other stuff as Emacs does, partly because Emacs just accepts that throwing around a bunch of text is 80% of what I want.
I'm a scientist/data scientist.
The GUI possibilities of Emacs are a bit limited though. Glamorous Toolkit looks better in that respect. I need to actually try it!
Something possibly similar is Studio: https://github.com/studio/studio - also Smalltalk. But seemingly dead (or at least sleeping). I encountered this via its author's RaptorJIT project, a fork of LuaJIT apparently intended to turn it into something maintainable by mere mortals: https://github.com/lukego/blog/issues/19 - some videos here: https://www.youtube.com/@lukego/streams
How you create contextual tools is secondary. Emacs' ability to create extensions is certainly interesting. But here are some questions to consider:
- how many contextual tools do you have for your system? 10s, 100s, 1000s?
- or, for how many and what kind of questions don't you have contextual tools? and why is that?
At the extreme, when practicing Moldable Development, we tend to address dozens of questions per day per developer through contextual tools :)
Startup types used to say that it's not enough to be better than the current solution - you have to be 100 times better to justify the switching cost, and with LLMs it seems like that is more like 1000x. It sucks.
Still, even assuming this is correct (which is not yet anywhere close to being certain), as long as there will be humans deciding what goes into production, decision making will be the bottleneck to address. If people rely on reading, it's too slow. Way too slow. If people only look at the system from outside, they will be making uninformed decisions.
Moldable Development offers a different option :)
I find myself wanting to talk about this in more detail, but this account is anonymous - could I email you?
Or contact me on social media.
ggplot and the Grammar of Graphics are very interesting indeed. With GT we have some support, but so far we focused on creating the underlying graphical stack first, one in which everything (including the editor and visualizations) is represented in a single rendering tree. This graphical stack already offers the possibility to create various kinds of graphs. The Grammar of Graphics is of interest to us, and we'll likely look at it in more depth in the near future, because it allows us to define graphs declaratively and serialize them across the network.
What kind of analyses do you do? Only for data, or also for systems?
Also, if you have a chance, I would be interested in what you think about the distinction we make between defined and dynamic exploration towards the end of this chapter: https://medium.com/feenk/rewilding-software-engineering-a360...
Reading a book is entirely voluntary. I was offering it as a suggestion for the person who had the original question and who seemed interested in learning more.
"The goal? To make the inside of systems explainable.
[image]
caption: "The need for moldability is pervasive. The treemap shows the classes of Glamorous Toolkit organized in packages. Blue denotes classes that contain at least one contextual view; green shows those that have an example."
I have no idea what that caption or the image means, nor what either has to do with making the inside of systems explainable.
We removed the image and pushed the video up. Is this better?
Technology is too fragmented - day to day, many of us depend on a ton of tools to get through our (work)days, even for simple stuff. Log into the consoles of platforms or tools X, Y & Z (say X = Jira, Y = AWS, Z = the repo) to introduce a new change/feature/bugfix, whatever. Then switch to the IDE of choice to eval code, then the browser to read the docs, then Google/Claude to ask questions, and then be interrupted by a meeting, take notes, ... and on and on.
I see an opportunity here to use something like this to unify your entire workflow/data-from-tools/tools into a uniform system you can query to get answers without having to jump through hoops (and give up). It appears that investing time in building a repertoire of tools with something of this sort helps one automate or speed up chores (at work, or even at home?).
What else could you do with this apart from what's in the demos? Here are some "can it do this?" questions; if anyone who has used it could answer them, that would be helpful:
* organise meeting notes across various topics and auto-compile a searchable "decision log" that you can drill into for context at a future date?
* connect requirements (specified in Excel) to Jira tickets and code, so you can jump back and forth in a single GUI?
* log hours you have worked on something?
* create an up-to-date management process reference/checklist with escalation contacts, response templates, and the ability to engage others on the roster, and later bring all the information together into an automated PIR timeline and other details?
* display system metrics of services deployed in AWS based on complex rules and raise local alerts?
* maintain a schedule of your kid's swimming lessons?
* Notion-like "verification expired" notifications?
* live tables (say, of stock market tickers)?
That intuition is quite right! If you look inside the environment, you will see multiple case studies. These are not things you do with the environment. These are things we've used the environment for. They are examples of what you can build. And if you look closer you will see different classes of problems. These are classes of problems for which the industry offers significant vertical solutions. Yet we show them addressed with much less effort, uniformly, and in a much more contextual way. The idea is that if this is possible, it means it's also possible to produce tools for arbitrary combinations of problems.
If you intend to explore it further, please do let us know how it goes.
However, I downloaded the app but cannot figure out how to view my own source code. None of the example videos that I can find show me how to use an existing local Git checkout of source code.
I still do not understand what "moldable development" means. To me this implies a different paradigm for building applications, which does not seem to be what's offered. I don't understand what a "micro tool" is. Is it a unit of code? Am I missing something here?
Glamorous Toolkit is indeed built by the team at feenk, which is a company. However, we created the company to fund the research, not the other way around. Everything we do is free and open-source.
Our goal is not to build Glamorous Toolkit, but to validate the idea that what we call Moldable Development (programming through contextual tools) leads to explainable systems.
We start from legacy systems because that's a hard problem that is not yet solved. If we are to find a new way of working, it should work in the least favorable conditions. With legacy systems, we have extreme combinations of technologies and intertwined domain knowledge that we have to make sense of.
At first the approach was called humane assessment, but along the way we found that it actually changes forward engineering as well. For example, we've seen cases in which startups produce pitches for investors right from the development environment. Or teams that put a face on domain-driven design by showing the domain to business people from the development environment, leading to co-development. Glamorous Toolkit itself is an extensive case study of Moldable Development, too.
More recently, we have also applied the same ideas to creating editing experiences. Imagine editors of generic languages that understand the framework or the domain and that offer inline activities.
Yet another application area is that of code transformation. It turns out that we can describe large scale code transformations through contextual transformations as well. This then allows us to evolve large code bases seamlessly.
This can work if creating a new experience is inexpensive. And this can be achieved if we can compose the overall experience out of tiny pieces: micro tools like views in the inspector, custom debuggers, dedicated editors, or even transformations.
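To make "micro tool" a bit more concrete, here is a toy sketch in Python (deliberately not GT's API) of the shape of the idea: a contextual view is a tiny, named function attached to a domain object, cheap enough to write for a single question and to throw away afterwards. All names below are invented:

    # Toy registry of contextual views; an illustration of the idea, not GT's API.
    from dataclasses import dataclass, field

    VIEWS = {}  # domain type -> {view title: view function}

    def view(for_type, title):
        """Register a small contextual view for a given domain type."""
        def register(fn):
            VIEWS.setdefault(for_type, {})[title] = fn
            return fn
        return register

    @dataclass
    class Order:
        customer: str
        items: list = field(default_factory=list)  # (name, price) pairs

    @view(Order, "Items by price")
    def items_by_price(order):
        # One question ("what drives the total?"), one three-line answer.
        return sorted(order.items, key=lambda item: item[1], reverse=True)

    @view(Order, "Total")
    def total(order):
        return sum(price for _, price in order.items)

    def inspect(obj):
        """Stand-in for an inspector: show every view registered for this object."""
        for title, fn in VIEWS.get(type(obj), {}).items():
            print(f"== {title} ==\n{fn(obj)}")

    inspect(Order("ACME", [("widget", 9.5), ("gizmo", 42.0), ("bolt", 0.3)]))

The point of the sketch is the cost model: each view is a few lines written next to the object it describes, so adding one per question is realistic.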
There are a few more details at moldabledevelopment.com including the beginning of an open book I am writing with Simon Wardley.
Does this address your questions?
I think it has finally clicked in my mind. You're building a way to represent systems that is richer than text, which ideally reduces cognitive load when trying to understand or modify them. Since every system is different, this requires custom code, and that's normally expensive. GT is an attempt at reducing the cost of developing custom meta-programming tools. Building systems with rich representations from the very start is moldable development. Is this accurate?
Indeed, we regard software engineering as being primarily decision making. This is a stark departure from the typical perception of software engineering as a construction activity.
Once you take this path, the tools are going to be different. So different that they will appear odd to most people used to the other point of view.
For example, a typical development environment will start with an editor. But editing should come after reading. So, that design is really not that ideal. Having the editing come at the end is perhaps more appropriate. And there are several other such consequences that stem from that original difference in points of view.
The graphical part is one source of difference, but there are others as well. For example, chapter 4 in the Rewilding Software Engineering book that Simon Wardley and I are writing compares what we call defined explorations (as seen in Jupyter notebooks) with dynamic explorations (as experienced in Glamorous Toolkit): https://medium.com/feenk/rewilding-software-engineering-a360...
(I looked, but couldn't find that in the documentation)
Will keep working with and experimenting with it.
A single tool in the toolkit is already equivalent to notebooks, at least from what I glimpsed in the introductory video. Then you have the rest of the tools, how easily it can inspect objects, and probably manipulate them.
This is probably one of the ways we will work in programming in the future, once someone creates a similar tool around a mainstream language that can easily interact with LLMs, APIs, and data visualization tools.
In fact, there is already LLM integration including programmable chats with the possibility of contextualizing the interface of each message in the chat :)
Plain Pharo is a really nice environment too.
You are not supposed to "run" the tool for your program. You start with a question about your program and then build a tool that shows it.
Take a look here at an example of an explanation of an algorithm written in Python (we took Andrej Karpathy's tokenization algorithm): https://x.com/compose/articles/edit/1822723570574688256
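For context, the linked explanation concerns a byte-pair-encoding tokenizer; the following is a compressed Python sketch of that underlying idea (repeatedly merge the most frequent adjacent pair), not the code from the linked article:

    # Byte-pair encoding in miniature: repeatedly replace the most frequent
    # adjacent pair of ids with a new id.
    from collections import Counter

    def most_frequent_pair(ids):
        """Count adjacent id pairs and return the most common one."""
        return Counter(zip(ids, ids[1:])).most_common(1)[0][0]

    def merge(ids, pair, new_id):
        """Replace every occurrence of `pair` in `ids` with `new_id`."""
        out, i = [], 0
        while i < len(ids):
            if i + 1 < len(ids) and (ids[i], ids[i + 1]) == pair:
                out.append(new_id)
                i += 2
            else:
                out.append(ids[i])
                i += 1
        return out

    ids = list("aaabdaaabac".encode("utf-8"))
    merges = {}
    for step in range(3):                  # three merge rounds, to show the loop
        pair = most_frequent_pair(ids)
        new_id = 256 + step                # new token ids start after the 256 byte values
        merges[pair] = new_id
        ids = merge(ids, pair, new_id)
    print(ids, merges)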
If you want more details of how it fits in the larger picture, take a look at chapter 4 of the book I am writing with Simon Wardley about Moldable Development: https://medium.com/feenk/rewilding-software-engineering-a360...
If you have time and inclination to look at these, I'd be interested in feedback.
We are trying a different way of explaining starting with the overall problem of how to make decisions in software engineering in a book Simon Wardley and I are writing in the open. Perhaps it's of interest and sheds a bit more light: https://medium.com/feenk/rewilding-software-engineering-900c...
https://en.wikipedia.org/wiki/Structure_editor
https://martinfowler.com/bliki/ProjectionalEditing.html
Is gtoolkit the most advanced projectional editor or structure editor so far?
We were certainly aware of Intentional Programming. And indeed, Glamorous Toolkit does have a language workbench underneath with which we can create editing experiences for various technologies.
But we started from the "reading" part of software engineering not from the "writing". That's because "reading" the system occupies the largest amount of development effort and it's the least optimized activity. Through contextual tools we can improve it manyfold. And this, it turns out, leads to a new way of "writing" as well. We call it Moldable Development.
I suspect this paradigm would be better served by nuking the implementation and reframing this around AI with tool usage. That seems to be where development is going.
In any case, we wrote a bit about what we think of the intersection between Moldable Development and GenAI as explanations here: https://medium.com/feenk/rewilding-software-engineering-a360...
AI can use tools, including everything GT can use. MCP is an emerging "standard" for how to make tools available to AI.
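For the curious, exposing a tool over MCP is quite small in practice. Below is a minimal sketch using the official Python SDK (pip install mcp) as I understand its FastMCP helper; the server name, tool, and logic are made up for illustration:

    # Minimal MCP server exposing one tool over stdio, so an MCP-aware AI client
    # can launch this script and call the tool. Names are illustrative only.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo-tools")

    @mcp.tool()
    def word_count(text: str) -> int:
        """Count the words in a piece of text."""
        return len(text.split())

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default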
In GT, we have integration with LLMs as well, including programmatic ways to create interfaces for it.
MCP is certainly interesting.
AI is also moldable development
If you had to ask what AI tool usage is, then you have missed significant aspects of the shift
Moldable Development is first and foremost about the interface. AI is an engine. The interface is the thing you interact through. Like the chat. Or the editor with chat abilities. That little layer is worth paying attention to. Because it will define how you think about your interactions with AI.
I'd still be interested to hear if there is a specific point you disagree with in what I wrote above and, in particular, in what we wrote in our little book (the link is above) about the topic of this conversation.
I learned oop with Smalltalk so the syntax and feel aren't a problem.
I think code organization is the biggest weakness of the system. I like to zoom around a codebase in vi.
Very cool concepts though. Do you have a list of these types of development environments? I recall one that was demonstrated on a graphical output and another "next phase" that was working out keeping up with different database sources. I believe it was a different group, but I can't remember the name of that one.
Awesome work, and a field with massive potential. It's an environment that seems to work better than no-code and low-code environments.
>The only problem is that they seem to get unwieldy after a certain point. The view of all the different tools / libraries that come with it at the end of the presentation shows that.
That view at the end does not show that they get unwieldy at all. It shows that the contextual tools were needed everywhere. If the cost of tools is so low that you can amortize the cost of a tool on the first use, you can literally throw them away after that first use. In fact that's the fate of most tools. Those thousands of tools that you can see in a GT distribution are those that proved to be reusable. Many more were not :)
There were many tools that showed some visualization. But what we try to show with GT is that there exists a way to tackle arbitrary problems. This is possible because we see the environment itself as being a language made out of visual and interactive operators that can be combined in many ways.
> "Glamorous Toolkit is the Moldable Development Environment"
So it's some sort of an IDE? What does moldable mean?
> "Make systems explainable through contextual micro tools"
What is a "system" in the context of an IDE? "Contextual micro tools" also sounds completely abstract.
> "Each problem about your system is special. And each problem can be explained through contextual development experiences. Glamorous Toolkit enables you to build such experiences out of micro tools. Thousands of them ... per system. It's called Moldable Development."
... this does not help at all. Just more words without meaning.
Next, there's the video. For somebody with zero context so far, why would they sit through a 46-minute, low-quality video?
tudorgirba - if this is your project, you really need to focus on getting the top half of the page right. People won't watch your video, no one will read your book if you can't give them a hook they understand.
Use words and phrases, including adjectives, with concrete and well-understood meanings:
* Don't say "micro tool". Like Posix utilities? What is a tool? What makes it micro?
* Don't say "contextual development". Isn't all development contextual?
* "moldable" - no one knows what this means, don't force them to try and figure it out.
* Don't say "system", it is too abstract.
For example, "Glamorous Toolkit is an IDE for literate programming with first class support for interactive visualizations". If you can't get that sentence right, people just won't invest in learning more about your platform.I agree that the message is not yet clear for most. We can see it in these threads quite well. Now, this is not the first one we are trying, and we will continue to try further :).
The sentence you provide is certainly interesting because it is relatable. The problem is that it talks about a fraction of what we want to convey.
At this point in time, as we do not know how to convey the idea succinctly, we are looking for people that will take the time to look at the more elaborate explanations. It turns out that there exist such people. It seems to me that you might be inclined to look at it, too.
Please do let me know if you do. I would offer to show you around. And who knows, perhaps you can contribute a better presentation for what this is. What do you think?
That's ok! Hint at it and let people discover it instead of trying to force them from the get-go. Utilize progressive complexity; start simple, from first principles, and add complexity in bite-sized chunks. Show, don't tell.
No one wants to have to learn an entire philosophy before they can start using a tool.
For inspiration, perhaps review how other very deep programs present themselves, for example orgmode.org. One caution there is that orgmode itself is famously obtuse for beginners.
Lastly, it is a bold statement to say something like "we have discovered a new development methodology, and have designed this toolkit around that philosophy".
Such a statement requires a ton of evidence that such a methodology is useful, and currently there simply is not enough.
orgmode is certainly interesting, but again, its goal is a (small) subset of what is achievable with GT :). And as you say, even that is hard for beginners.
> Lastly, it is a bold statement to say something like "we have discovered a new development methodology, and have designed this toolkit around that philosophy". Such a statement requires a ton of evidence that such a methodology is useful, and currently there simply is not enough.
I am well aware of what that statement says. I did not utter it in the first 10 years of this journey. But by now, I do believe we do have the evidence, and a good deal of it is even available publicly and freely. Of course, there is still this little issue of people actually taking the time to evaluate that evidence. If people are not going to look at the evidence, it's never going to be enough. And that's just fine because eventually, some people will look at it :).
We really do not advertise our marketing ability. Only that we are researchers and engineers that might have found a solution to a large engineering problem :).
Or to put it differently, would you rather take engineering advice from marketeers? :)
Hope this helps.
"Make systems explainable through <unexplainable>contextual micro tools</unexplainable>"
Making writing in plain English explainable is out of scope :))
In fact, you might not need MCP for that either. There already exist programmable abilities to work with LLMs within the environment.
The fewer operating systems and environments there are, the lower the incentive to make portability a feature.
GT works on all desktop OSes (and on Android). It works with Git for all sources. It can interoperate with the file system. It works with other runtimes like JS or Python. It works with the Debug Adapter Protocol to help accommodate other runtimes. It works with language servers, too. It even interoperates with an embedded web browser (through WebView on Mac and Windows) both ways.
That's not exactly a lack of interoperability, is it?
The fact that GT is as interoperable and portable as you describe, yet still receives such comments, shows how short-sighted many people in technology are, or have become.
We are in violent agreement :-). I have watched your demos at UKSTUG and my jaw has been on the floor -- very impressive.
I am glad you found the demos interesting. I'd be interested in what made them attractive from your point of view.
You have now given me enough reason to binge watch all at https://www.youtube.com/@gtoolkit :-)
From our point of view, the interesting bit is that we show that we can use the idea of contextual tools for every development problem at any abstraction level. That's counterintuitive, but we have worked like this for 10 years and we have not found a counterexample yet :)
With the territory also comes that it's somewhat hard for newcomers to understand why the tool is important to longtime converts. The clear use cases seem mundane. 'You can edit text in it'. 'You can execute code in it'. 'It lets you debug programs'. To which one might respond: 'Yeah, so? Why do I need elisp/Smalltalk? My Electron app already does that and it doesn't come with weird Internet nerds.'
If you get into it and later find that it helped you out to be able to deeply inspect or reprogram your notetaking, program execution, feed reader, git environment, then you'll start to think that it has something special. But that might take a while, and most of the way there you could just as well have used some other tool instead.