Why is this site built with C (marcelofern.com)
153 points by todsacerdoti 2 days ago | 32 comments
kianN 1 day ago
One of the most commonly cited drawbacks about C (limited standard library and package management) is something I’ve grown to enjoy when working in a well-loved C codebase.

Instead of learning and working with bloated toolchains, every tool in the repo has been built or added carefully for the task at hand. Simplicity provides a lot of benefits over time.

FiniteIntegral 1 day ago
Even though I love 3rd-party tools (SDL my beloved), I still find novel uses in the C standard library, especially for string-related problems where people say "just do it from first principles". Sometimes snprintf is enough for my debugging!
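
For instance, a trivial sketch (made-up values) of snprintf as a bounded string builder that reports truncation:

    #include <stdio.h>

    int main(void)
    {
        /* snprintf never overruns buf and returns the length it wanted
           to write, so truncation is detectable. */
        char buf[64];
        int hp = 42;
        int n = snprintf(buf, sizeof buf,
                         "player: hp=%d pos=(%.2f, %.2f)", hp, 1.5, -3.0);
        if (n >= (int)sizeof buf)
            fputs("debug line truncated\n", stderr);
        puts(buf);
        return 0;
    }
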
01HNNWZ0MV43FF 1 day ago
I don't usually have tools in my repo, I only have my own code
dezgeg 1 day ago
autotools would like a word with you
kianN 1 day ago
Haha I definitely greatly benefit from autotools infrastructure to such a degree that I took it for granted in my above comment.
klysm 1 day ago
C toolchains are the most painful to set up in my experience and have incredible bloat. Familiarity is not simplicity
dig1 1 day ago
Maybe I'm missing something, but what is painful to set up with C toolchains? On almost every *nix, (g)cc, vi(m) and make are enough to be dangerously productive. Sure, you can overcomplicate stuff with modern IDEs and IDE-like tools, but that is out of the realm of the usual C toolchain.
klysm 1 day ago
I’ve never seen a real project that doesn’t use make, cmake, or some other glue on top of gcc/clang. Dependency management is absolute hell as well
BoingBoomTschak 1 day ago
Are you really putting make and CMake in the same bag? POSIX make and Plan 9 mk are incredibly simple, compared to autotools/CMake; even BSD/GNU make, with all their extensions, are quite simple.

Personally, after having gone through a lot (GNU make, sh script + POSIX make, CMake + conan at work), I think a simple POSIX 2024 make + pkgconf(ig) really is enough for simple programs; and Windows devs can use WSL or whatever.

klysm 1 day ago
They themselves might be “simple” but every real usage I’ve seen has been an impenetrable shitshow.
immibis 1 day ago
Dependency management in C is the hell that you and your dependency make it, alone. You can try to copy a .a file and a directory of headers into your project, or you can try to make a massive graph of 1000 dependencies that needs a SAT solver to untangle.

Dependencies are hell in JavaScript, Haskell, and Python too, but you don't notice because the computer works through the most common hells automatically... so developers don't hesitate to create hells. In C, they hesitate.

Ygg2 1 day ago
> In C, they hesitate.

Looks at most Linux distros.

Are you sure about that?!

myko 15 hours ago
i've started building my C projects with zig, it works quite nicely and has decent support for pulling in C dependencies as well (e.g., https://github.com/allyourcodebase/libxml2)
dietr1ch 1 day ago
> dangerously productive

As in SIGSEGV dangerous? C is a language so simple that, together with its lack of libraries, it'll drag you down into problems you weren't going to stumble into in most alternatives.

Sure, eventually you'll get your own scars and learn, the hard way, lessons that will stick and give you better intuition on how things work, but I don't feel there's a need to keep using C these days beyond learning and doing specific low-level stuff.

cocoa19 1 day ago
I’ll take a SIGSEGV bug any day over working on memory corruption issues. I have some nasty battle scars in that area.
perching_aix 1 day ago
> what is painful to set up with C toolchains?

This reads remarkably tongue-in-cheek, especially when combined with the "dangerously productive" remark a bit later, but: navigating the maze that is picking a C runtime, a libc implementation, a compiler or potentially compiler frontend and backend, a linker, a build tool, a dependency manager, and then figuring out the linking, figuring out the dependency management, figuring out the building process, tackling version control crap (setting up submodules) if needed, then rinse and repeat for every single project ever. And if you're targeting not just *nix, but also platforms people actually use (Windows), then this gets especially nasty.

Not your project? Well then you get the chance of taking a deep dive into a random assortment of these, taking up however much of your time. Or you'll just try building it and squash errors as they come. Such a great and sane toolchain and workflow.

All of this is of course completely excluding the very real factor of even knowing about these things, since being even slightly confused about this stuff is an immediate ticket to hours of research on various internet forums to even get to know what it is that you don't know - provided you don't just get sick of all this crap and move on to some other toolchain instead, or start the rote reproduction of some random dood's shitty guide, with the usual outcomes.

lelanthran 1 day ago
> navigating the maze that is picking a C runtime, a libc implementation, a compiler or potentially compiler frontend and backend, a linker, a build tool, a dependency manager, and then figuring out the linking, figuring out the dependency management, figuring out the building process,

You're misrepresenting this somewhat: for all but two of the items listed, you only need to "pick" a compiler and (as parent said) use Make.

The dependency manager is a problem, yes, but let's not misrepresent everything.

perching_aix 1 day ago
It comes up more when you try stuff multiplatform.

Another thing I didn't touch on is the massive compatibility tables for language features one needs to look out for, in case they plan to make their code work on multiple mainstream compilers, which is arguably the prudent thing to do.

I really don't think that considering the C/C++ build story as complex and frustrating would be misrepresentative.

lelanthran 1 day ago
> I really don't think that considering the C/C++ build story as complex and frustrating would be misrepresentative.

Who was talking about C++? This thread was about C, right?

(It's possible that I didn't read very closely upthread, but I'm almost certain that we were all talking about C).

I actually agree that C++ can be a pain in the rear, especially for multiplatform, and you have to pick all your options, etc.

But C? Not so much. Even on multi-platform.

As GP (or GGP, or GGGP, I forget how far upthread it is) said, with C all you need is a compiler, an editor and Make. That's pretty much still true, even if you're using third party libraries.

xigoi 1 day ago
The compilers require a bunch of arcane flags to actually do something useful, so you pretty much need some kind of build system or at least a shell script.
gerdesj 1 day ago
I have been a Gentoo Linux aficionado for decades. When you set it up you define a set of CFLAGS and CXXFLAGS etc. Those are your globals, your defaults.

I am also a human. When I deal with another person I have never met before, I have a set of defaults. I override them as required.

gcc (at least) only requires an input file and will behave reasonably, generating a.out.

You might like to pass -O2 for something a bit more complicated. Beyond that then yes you will need to get into details because any engineering discipline is tricksy!

xigoi 1 day ago
> Beyond that then yes you will need to get into details because any engineering discipline is tricksy!

When you have multiple files in your project or are using external libraries, pretty much any other programming language will know what to do. Only C requires you to manually name them in the compilation command even though they’re already named in the file you’re compiling.

dmitrygr 1 day ago
> compilers require a bunch of arcane flags to actually do something useful

   $ gcc lolno.c && ./a.out
   lol, no
   $
xigoi 1 day ago
Why is the executable named ./a.out and not ./lolno? Why are warnings disabled by default? What if you need to use <math.h>? And that’s just for a single file.
sudahtigabulan 1 day ago
> Why is the executable named ./a.out and not ./lolno?

  $ ls
  lolno.c
  $ make lolno
  cc     lolno.c   -o lolno
  $ ls
  lolno lolno.c
pitaj 1 day ago
Now do a project with multiple files.
spc476 1 day ago
Sure. Here's viola, a web browser from the early 90s, with a replacement makefile (GNU make) that's a bit more sane:

    CC      = gcc -std=c99 -Wall -Wextra -pedantic
    CFLAGS  = -g
    LDFLAGS = -g
    LDLIBS  = -L/usr/X11R6/lib -lICE -lSM -lXpm -lX11 -lXext -lXmu -lXt -lm

    VIOLA_PATH := $(shell pwd)/resources

    override CFLAGS += -D_POSIX_C_SOURCE=199309L        \
                    -D_POSIX_SOURCE -D_XOPEN_SOURCE     \
                    -D_BSD_SOURCE -D_SVID_SOURCE        \
                    -DDEFAULT_VIOLA_PATH='"$(VIOLA_PATH)"'\
                    -DVIOLA

    %.a:
            $(AR) $(ARFLAGS) $@ $?

    viola/viola: $(patsubst %.c,%.o,$(wildcard viola/*.c)) \
                    libIMG/libIMG.a     \
                    libXPA/src/libxpa.a \
                    libStyle/libStyle.a \
                    libWWW/libWWW.a     \
                    parmcheck.o

    libIMG/libIMG.a     : $(patsubst %.c,%.o,$(wildcard libIMG/*.c))
    libXPA/src/libxpa.a : $(patsubst %.c,%.o,$(wildcard libXPA/src/*.c))
    libStyle/libStyle.a : $(patsubst %.c,%.o,$(wildcard libStyle/*.c))
    libWWW/libWWW.a     : $(patsubst %.c,%.o,$(wildcard libWWW/*.c))

It's 155,000 lines of C code across 361 files. Not shown are the nearly 900 lines that make up the dependencies, but using `makedepend` (which came with my system) makes short work of that. I have a more complicated project that compiles an application written in Lua into a Linux executable. It wasn't hard to write, given that you can always add new rules to `make`, such as converting a `.lua` file to a `.o` file:

    %.o : %.lua
            $(BIN2C) $(B2CFLAGS) -o $*.l.c $<
            $(CC) $(CFLAGS) -c -o $@ $*.l.c
            $(RM) $*.l.c
Okay, that requires a custom tool (`bin2c`) but can other build systems do this? I honestly don't know.
Calavar 1 day ago
That still doesn't require any flags. Where it starts to get complicated is when you link in system libraries, because they vary between OSes and different versions/distros of the same OS. If you need to support multiple platforms this quickly becomes combinatorially complex, which is where Makefile hell comes from.
Macha 1 day ago
Great, now do TLS, a GUI, graphics rendering or database access.
ryao 1 day ago
Use stunnel for TLS. A benefit is that if you properly sandbox the daemon behind it, a compromise in the daemon behind TLS does not result in the server key being compromised.

A GUI could be done in SDL+Nuklear, GTK+ or others.

Database access from C is easy, since the databases are written in C and provide C APIs for access.

Dylan16807 1 day ago
Can you use those libraries without a bunch of arcane compiler flags? Because that's what the argument was about.
ryao 1 day ago
stunnel is not a library and linking to C libraries is easy.
johnklos 1 day ago
You're not wrong, but you're closer to wrong than right. C toolchains are the best of a collection of rather sucky things.

I can compile all sorts of things on my Mac LC III+ with 36 megabytes of memory. Sure, Perl takes nine days, but it works. What other language can actually be used on a machine with so little memory?

klysm 1 day ago
That’s a weird performance measuring stick to use, and I don’t see how it’s related
xigoi 1 day ago
From what I’ve heard, Nim is being successfully used on microcontrollers.
kianN 1 day ago
I’ve never been on a system that doesn’t have either gcc or clang built in.

But disclaimer that my experience in C is limited to a specific subset of scientific computing, so my experience is definitely limited

maccard 1 day ago
Windows, for one.
sgarland 1 day ago
That’s on you for using Windows.
maccard 1 day ago
My experience with "portable" software is that it's portable as long as you're using the same OS and toolchain as the author.

Other systems exist, and plenty of us are less belligerent about our choice of OS.

skydhash 1 day ago
As for Windows, I always assume you need to install an IDE like Visual Studio, CLion, or Code::Blocks.
maccard 1 day ago
Cmd and PowerShell exist and have done for a long time. There was a period of time when the Visual C++ compiler was bundled with the IDE, but it’s been available separately for the best part of a decade. Various incarnations of GNU-on-Windows have existed for longer than that - MSYS/Cygwin/WSL are all feasible options and have been for ages.

But, none of them come preinstalled.

klysm 1 day ago
gcc/clang isn’t really sufficient though right? Usually there is another build system on top
kianN 1 day ago
I only use clang with a Makefile and I’ve done some relatively complex projects albeit with only a few collaborators.
dmitrygr 1 day ago
I haven’t seen a system that has GCC but lacks make
inferiorhuman 1 day ago
Yeah, but is it a version of make that's compatible with your makefiles?
theamk 1 day ago
likely yes?

I've never had problems with make versions specifically. Usually the project requires distro at most X years old because of gcc/clang version or shared library version. By the time you solve those, your make is new enough as well.

inferiorhuman 1 day ago
I mean, yeah, if all you're using is Linux then the issues with buggy versions of GNU make are something you've probably not seen in a while. Apple is still on 3.81 due to GPLv3, which means bugs. None of the BSDs ship with GNU make in the base system, and I believe they all install it as gmake (which means you've gotta be careful about hardcoding calls to make).
ryao 1 day ago
It is possible to write portable make files that work with both.
maccard 1 day ago
In my experience as a regular MacOS user, the "portable" solution is to install the version of make that everyone else uses (i.e. it's not portable)
inferiorhuman 1 day ago
The pain in doing so and the very real chance you're trying to do something that can't be done in a portable manner is why things like cmake (and every other build system under the sun) exist.
lelanthran 1 day ago
> C toolchains are the most painful to set up in my experience and have incredible bloat. Familiarity is not simplicity

What are you comparing it to? C++? Java?

checker659 1 day ago
Once you set things up, chances are it'll still build just fine in 5 years tho.
jimbob45 1 day ago
Couldn’t agree more. Vcpkg + CMake doesn’t come close to the convenience of NuGet + MSBuild. The latter Just Works every time. You can’t say the same about C.
theamk 1 day ago
I think Windows is really best served by Microsoft technologies and Microsoft languages. If you want C, there is WSL.
1vuio0pswjnm7 1 day ago
What are the "least painful" toolchains to set up?

How do they compare to GCC in (a) size and (b) portability?

1vuio0pswjnm7 13 minutes ago
Size of statically-compiled toolchains for LLVM versus GCC

328.7M llvmbox-15.0.7+3-x86_64-linux

242.3M x86_64-linux-musl-native

169.4M x86_64-linux-musl-cross

1vuio0pswjnm7 39 minutes ago
Is cmake actually a requirement for compiling LLVM, such that one must compile cmake before one can compile LLVM?

Is compiling python (ninja and pyYAML) also a requirement?

1vuio0pswjnm7 1 day ago
Is Python actually a requirement for compiling the Rust toolchain, such that compiling the Rust toolchain requires compiling Python first?

https://rustc-dev-guide.rust-lang.org/building/how-to-build-...

klysm 1 day ago
Then the C toolchain needs to also be considered from a bootstrapping standpoint.
inferiorhuman 1 day ago
LLVM is a bit easier to set up in my experience. One of the most irritating things about setting up a GNU toolchain is the hard dependency on texinfo (which has been dropped in later versions I think).

In general I've found rust pretty easy to build.

wfn 22 minutes ago
Agree regarding the ease of building Rust (`cargo build`); extremely satisfying (git clone and cargo build...)

Does anyone have any comments on Bazel [1]? I'm kind of settling on using it whenever it's appropriate (C/C++).

[1] https://bazel.build/

massysett 1 day ago
I understand all this, no objections at all; I just wonder if the easier thing to do here is to write the blog posts in HTML and drop them in a folder for a web server, the same way I learned to do websites 25 years ago. What's making this complicated is the Markdown, and if you want something lightweight, Markdown doesn't pull its weight.
encomiast 1 day ago
It seems like we've spent the past 25 years trying to solve the big headache this creates: you have 100 blog posts, which means 100 html files. Any structural change you need to make to the site involves touching all 100 files. CSS came along and helped separate the presentation from the information a bit, but it still sucked to maintain 100 html files on the server.
mid-kid 1 day ago
I generally just have a makefile that for each page does `cat header.html page.html footer.html > out/page.html`. I realize this can be considered a "static site generator" but I think simple concatenation covers most of the pain points of writing static sites manually, without introducing intermediate formats and complex processing.

Another option is PHP, which was practically made for the purpose of generating HTML. You can run it on your pages to generate static versions, and just use "include" directives with that.

glandium 1 day ago
Or "server side includes".
thombles 1 day ago
You're right but in my experience the annoyance factor jumps up as soon as you start wanting an RSS feed.
skydhash 1 day ago
I wonder if you can have a snippet for that (it can be a script that generates the code for one item). It’s not like such a blog is updated thrice a day!
imgabe 1 day ago
I sometimes go down this rabbit hole and wonder why people don't use server-side includes to just have Apache or nginx handle a common header / footer or whatnot. Seems like what they were made for. Are they slow or something?
carlosjobim 1 day ago
They're instant and they work great. PHP includes also.
xnorswap 1 day ago
Isn't this what Jekyll solves?

Posts are all written as markdown, styling and layout are centralised with layout HTML pages and CSS.

I believe indexes can be auto-updated via post metadata.

And it's all static pages once generated, so there's no dynamic load on a database.

lelanthran 1 day ago
Client-side includes are easily done in a custom web component now. I use one. You can write a simple one in about 60 lines of JS.

If all you need is include functionality, then that's the way to go for static file hosting.

jimmaswell 1 day ago
PHP is perfectly sufficient here. Add a static cache if the overhead gets too much.
ryao 1 day ago
There are static site generators. There are even plugins that turn Wordpress and Drupal into static site generators, such that you do not need to edit the files one at a time to make changes.
drwu 1 day ago
I totally agree. Using just a POSIX shell to concatenate header/footer and include the required CSS file, you can easily make a simple static blog generator. (That is what I did with mine.)

No Markdown, no Perl/Python/Ruby, also no binary program, just a few simple shell scripts and plain HTML files.

tiehuis 1 day ago
Using a portable, minimal markdown dependency (such as cmark [1]), I think markdown can be quite a low barrier here. I personally do something similar to what you have described on my blog, with an additional cmark conversion, and find it quite simple [2].

[1] https://github.com/commonmark/cmark

[2] https://github.com/tiehuis/tiehuis.github.io/blob/38b0fd58a2...

rglullis 1 day ago
> Why Is This Site Built with C

Because you are a developer who enjoys coding, and you will find any and every excuse to spend more time writing code than you care to admit.

sgarland 1 day ago
If you are a developer who does not enjoy coding, I question your career choice.
indigoabstract 1 day ago
Some developers enjoy coding and some enjoy paychecks. Some enjoy both.
knighthack 1 day ago
Lack of passion and enjoyment for an art form generally correlates with a mentality of doing 'just-enough', rather than a keen desire for craftsmanship.

I think those who enjoy paychecks but don't enjoy coding are likely to be incompetent developers. Which is not a desirable end.

perching_aix 1 day ago
They are probably questioning it too.

But also, enjoying coding recreationally and enjoying coding for work are very different things.

rglullis 1 day ago
[dead]
m000 1 day ago
I think TFA is unfair wrt pandoc's dependencies. I'm not sure if the listed "ecosystem" is what you need to build pandoc from source, or just the result of shitty packaging of pandoc by the OS package maintainers.

For the record, the .deb download from [1] gives you a 146MB statically linked pandoc executable that depends only on libc6 (>= 2.13), libgmp10, zlib1g (>= 1:1.1.4).

[1] https://github.com/jgm/pandoc/releases

zahlman 1 day ago
The "Markdown" package on PyPI gives you a <1MB (including precompiled .pyc; excluding Pip, which is trivial with the know-how) virtual environment that depends only on the Python runtime. Pandoc is massive overkill if you're only interested in converting one format to one other format.
tony-allan 1 day ago
The C program converts Markdown (written by the author) to HTML and is run by the author (and therefore no attack surface). Actual hosting uses GitHub Pages so there is nothing to worry about there. Simple; job done.

"The only work I needed to do was to write a C script (which turned out to be ~250 LOC) to call md4c functions and parse my md files, and then chuck those converted files into the GitHub Pages repo."

zoogeny 1 day ago
> An alternative for saving time with recompilation was to update my script so that only new markdown files or changed ones are marked for recompilation. That would involve too much wizardry if I wanted to make the script nice and robust.

Am I crazy for thinking this doesn't seem like too much wizardry? I mean, I have a source directory and a destination directory which gives me a set of unambiguous file-to-file mappings. At which point I'm looking at comparing some kind of file timestamps. Add in checking if the destination file exists at all and it looks like 2 or 3 system calls and a couple of comparisons.
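
Something like this sketch (POSIX stat(2), hypothetical paths; _stat on Windows is analogous):

    #include <stdbool.h>
    #include <sys/stat.h>

    /* Rebuild if the destination is missing or older than the source. */
    static bool needs_rebuild(const char *src, const char *dst)
    {
        struct stat s, d;
        if (stat(src, &s) != 0)
            return false;           /* no source, nothing to do */
        if (stat(dst, &d) != 0)
            return true;            /* destination missing */
        return s.st_mtime > d.st_mtime;
    }

    /* e.g. needs_rebuild("posts/foo.md", "out/foo.html") */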

However, I agree with almost everything else in the article, even just blowing things away and rebuilding every time if it is fast enough. I was musing today that with LLMs we might see a resurgence of personal libraries. Why would I take in a dependency (227 transitive dependencies for that one project dependency!) when I could implement the 20% of functionality from that bloated library that I need? In some circumstances it might be easier to describe the functionality I need to an LLM, review the code and then I have my own vendored implementation.

But if this was me, I would probably choose Go over C. The speed would be good enough, GC for memory safety and convenience, unlikely to be going away in the next 50 years, simple path from static to dynamic generation with standard lib webserver, etc.

bowlofhummus 1 day ago
They're probably already using Git for the website so using a pre-commit hook is by far the easiest way.
miguel_martin 1 day ago
I'm also using MD4C for my website with KaTeX for latex rendering and utterances for comments. Instead of C, I'm generating my site with Nim using Karax + Mummy and publishing to Github Pages with Cloudflare for HTTPS.

Here is the source code (it should be easy to fork for your own site): https://github.com/miguelmartin75/miguelmartin75.github.io

- To generate a page: https://github.com/miguelmartin75/miguelmartin75.github.io/b...

- See usage: https://github.com/miguelmartin75/miguelmartin75.github.io/b...

    - Install nim & run `nimble setup` in repo to install necessary packages

    - Run `nim init` to enable deployment/publishing the site (using git work trees)

    - Serving it locally before deploying is as easy as `nim dev`, see: https://github.com/miguelmartin75/miguelmartin75.github.io/blob/master/config.nims#L15-L16

    - Serving it locally with my private notes (710 files): `nim devpriv`, see: https://github.com/miguelmartin75/miguelmartin75.github.io/blob/master/config.nims#L12-L13

    - Generating the site: `nim gen`

    - To publish the site: `nim publish`
I use Obsidian to write notes and can put top-level YAML metadata on each page and retrieve it in my generator, see: https://github.com/miguelmartin75/miguelmartin75.github.io/b...

For the local development HTTP server (using Mummy), you can refresh the page to regenerate the markdown. Re-generating the page on file changes and refreshing the page with websockets is on my backlog.

Previously I was using GatsbyJS, but I had a lot of trouble with dependencies constantly updating and with copying the setup to another computer, and generally it was pretty slow to generate the site. Now I can generate my site in <0.1s - even if I include all my private notes (710 markdown files).

assimpleaspossi 1 day ago
We did the same. We created bespoke websites entirely in C, one of which you may have visited (years ago). And we used C for the same reasons the author mentions, and we agree with his last paragraph.

Why? At the beginning we were frustrated trying to find one true solution--granted, the perfect solution--to what we wanted to do and how we wanted to work for 20 years. We found that C interfaced with everything, worked everywhere, every programmer knew it, and it could do anything we wanted it to do. If there wasn't a library for something, we could easily make our own.

I could go on and on but I won't. I closed shop just a few years ago cause my reasons for doing that work went away.

rorads 1 day ago
I appreciate the dedication to minimal performant code. For me a standard Jekyll setup with a theme and github pages is absolutely fine. It's slow and a bit annoying sometimes but it's very straightforward markdown to html and compiles categories into URL structure. It's also easy to throw in some JS if you need, customise CSS etc.
algo_lover 1 day ago
Still not sure why you chose C though? You could have chosen anything which meets all your requirements.

Many languages have markdown parsers in them, produce binaries, and are portable.

apitman 1 day ago
This is covered in the article.
Brian_K_White 1 day ago
Having read the article, I somehow know why they chose C.

C satisfies all their priorities, and there are not many, or even any other languages that do as well, and none actually better.

gorgoiler 1 day ago
This author is great. Their blog engine really ought to link to the top level. There’s lots of content.

https://marcelofern.com/

I am immediately intrigued about doing code review in Vim (from their posts) as well as using vale.py to lint my prose (from their GitHub).

jopsen 1 day ago
I'd rather trust 5 libraries written in a memory safe language than one written in C.

Sure, if the memory safe language comes with a package manager that happens to have postinstall hooks, the picture might be different.

But scanning some go packages to see that they don't do I/O is rather feasible.

tuveson 1 day ago
Why? It's a static site generator. He controls the inputs - there's basically no attack surface. I can't think of a situation where lack of memory safety would be less of a problem.
apitman 1 day ago
What type of problem for the author's workflow (rendering markdown files) are you expecting as a result of them using C?
exe34 1 day ago
what's the trust model that you think the author should be using here?
gbraad 1 day ago
The pandoc generation was slow? An easy solution is to only regenerate files that are new, and/or refresh everything when the template changes.
aninteger 1 day ago
The article also mentioned the size of the dependency (over 400 MB).
gbraad 1 day ago
On Fedora this installs 187 MiB. Surely not small, but everything has dependencies and can have issues with them. It is a trade-off. It was an interesting article, though my solution was simple and worked for me.
billforsternz 1 day ago
I like this, and used the same approach myself (well, C++ that's basically C plus a few nice things from C++), including the same single md4c external dependency, for a little static website generator of my own that I use extensively myself https://github.com/billforsternz/winged-spider. I haven't touched it since I created it, but I use it all the time, e.g. https://wellingtonchess.club and it builds my site instantaneously. I then use FileZilla to deploy like a primitive caveman.

I didn't have problems with a C++ toolchain, I just go g++ *.cpp. No make, no cmake, no autotools (shudder). It's fine for small projects with a handful of source files and a few thousand LOC.

rgovostes 1 day ago
Many tools in the web publishing world are fixated on build speed. I would think transformations of text into templated HTML ought to be basically instantaneous on modern CPUs and SSDs, even for slow interpreters, for websites of even substantial size. It doesn’t exactly require algorithms with quadratic runtimes.
TrayKnots 1 day ago
Well, I have in essence nothing against this post. I agree with the notion that too many dependencies are not necessary. That we can keep lots of things simpler.

I have nothing against directly implementing this in C, or just writing markdown files and having them auto-translated into HTML.

I just don't like his arguments about how it must be fast to recompile everything. I am writing this comment, and this is going to take me a few minutes. After all, I am thinking about what I am writing, typing it out, thinking some more. And then the deploy is the thing that got to the author? Really? Time to server is an important metric?

Let's be real, nothing would be lost if it took 5 minutes. He would send it off and 5 minutes later, his phone buzzes, notifying him that it is done.

Alright, he found a way to do it in under 10 seconds. Cool. Good for him. Now that it is built, there is nothing bad about it. I just don't see how this was ever an important KPI.

IshKebab 1 day ago
Yeah I use Pandoc markdown to HTML for my site and it's easily fast enough. Especially if you use a Makefile.

I think this was just a fun challenge rather than to get any kind of useful advantage.

BlimpSpike 1 day ago
Having the MD file and the website open side by side and being able to see immediate updates as you write is valuable.
IshKebab 1 day ago
You don't need to actually generate HTML for that though. VSCode will show a markdown preview, and there are tons of other markdown editors that can do that too.
skissane 1 day ago
> I just don't like his arguments about it must be fast to recompile everything.

C isn't necessarily fast to recompile everything. Too much preprocessor magic and the compilation can slow down a lot.

And a lot of the reason for that, is that C's preprocessor is inherently weak – e.g. it doesn't explicitly have support for basic stuff like conditionals inside macros – but you can emulate it with deeply nested recursion – which blows up compilation times enormously. If C's preprocessor was a bit stronger and natively had support for conditionals/etc, one wouldn't need these expensive hacks and it would be a lot faster.

Example of real world project where C preprocessor slowed things down a lot is the Linux kernel: https://lwn.net/Articles/983965/
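
The emulation looks roughly like this (the classic token-pasting trick, sketched from memory rather than taken from the kernel):

    /* A "conditional" built from token pasting: IIF(1)(a, b) expands
       to a, IIF(0)(a, b) expands to b. Preprocessor libraries stack
       recursion and detection machinery on top of exactly this. */
    #define CAT(a, b) CAT_(a, b)
    #define CAT_(a, b) a##b
    #define IIF(c) CAT(IIF_, c)
    #define IIF_0(t, f) f
    #define IIF_1(t, f) t

    IIF(1)(int yes;, int no;)   /* expands to: int yes; */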

catgirlinspace 1 day ago
I think the part you quoted is about it needing to be fast to compile from markdown to html, not compiling the C program.
TrayKnots 1 day ago
yep
grandempire 1 day ago
This problem would also be solved by using make with his original pandoc script (or just using Perl Markdown).

Make determines which files changed and their dependencies and reexecutes your script to regenerate the necessary output.

bbkane 1 day ago
I appreciate his argument about tooling lasting a while, but for me the most important thing is the durability of the content format (markdown), and, secondly, owning the domain name.

If, for example, their posts were stored as Word documents or Google Docs, they would have far fewer options for building and deploying their blog.

But, because they're using the (comparatively) simple markdown format, they have a lot more options.

I do something similar and I've migrated my blog from Jekyll + Github Pages to Zola + Netlify without too many issues. If Zola or Netlify go away, I'm confident I can migrate again easily.

fragmede 1 day ago
I don’t know about more, just different. Google Apps Script can interact with Google Docs, and you can wire it up so that you can write a Google Doc on your phone and have it auto-publish. You could also set up such a thing with markdown, just with different apps (hopefully not via ssh), but that really seems like six of one, half a dozen of the other.

If that's a use case you want to support because of convenience, I really don't see a reason to use one over the other, other than personal preference.

bbkane 23 hours ago
Practically I think you're correct, but I personally prefer owning the files instead of keeping them stored on Google (though of course you could set up automation to export on a schedule or something).
rurban 1 day ago
while getc != EOF, really? You stat the file for its size, then alloc a buffer of that size and fread into it. Or fseek to the end, get the size via ftell, and fseek back to the front.
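
i.e., something like this sketch (ftell-based; error handling abbreviated):

    #include <stdio.h>
    #include <stdlib.h>

    /* Read a whole file into one malloc'd, NUL-terminated buffer. */
    static char *slurp(const char *path, long *len_out)
    {
        FILE *f = fopen(path, "rb");
        if (!f) return NULL;
        fseek(f, 0, SEEK_END);
        long len = ftell(f);            /* size in bytes */
        fseek(f, 0, SEEK_SET);
        char *buf = len < 0 ? NULL : malloc((size_t)len + 1);
        if (buf && fread(buf, 1, (size_t)len, f) == (size_t)len) {
            buf[len] = '\0';
            if (len_out) *len_out = len;
        } else {
            free(buf);
            buf = NULL;
        }
        fclose(f);
        return buf;
    }
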
kopirgan 1 day ago
Interesting.. Don't think Hugo needs to be as complicated as he described.

I'm using it for a couple of static sites and you can ignore most of the complexity and just treat it pretty much like an md converter. There are minimalist templates available as well that get your site going.

It's lightning fast too, and whether there's a GC at the back of it seems irrelevant.

tasuki 1 day ago
Maybe start with preserving the URLs you've created in the past?

Longevity isn't just for programs, it's also for content...

gwbas1c 1 day ago
When I was in high school in the 1990s I wrote a site generator in C. It took each post, which was just the body part of HTML, and then added a header and footer.

Almost 30 years later, there's plenty of more modern tools that will do a much better job than custom C code.

grandempire 1 day ago
When it comes to personal projects some people love to tinker with new tools, but for me any time not spent writing code is discouraging. Velocity is important for fun and experimentation.
kuon 1 day ago
I agree with the author that many projects are waaaay too big. I have been using Zola and I quite like it. I like the C approach, which gives an extra geek point!
enneff 1 day ago
I agree that Hugo is pretty bloated, but you could much more easily write your own bespoke website generator in Go than in C.
immibis 1 day ago
I run C on the server. Static content blocks from a directory are converted to #defines by a Python script, so I literally #include "staticpages.inc" and reference them by name. Another Python script generates the code for building the navigation menu (dynamically based on the current page). Another one generates the code to parse the URL and select a request handler. Most pages are header+navbar+content+footer, so I wrote a utility function:
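
Something along these lines (a hypothetical sketch, not the actual code; PAGE_HEADER/PAGE_FOOTER stand in for the #defines from staticpages.inc, nav_html for the output of the generated menu code):

    #include <stdio.h>

    /* Hypothetical: assemble a standard page out of the four parts. */
    static void serve_page(FILE *out, const char *nav_html,
                           const char *content, size_t content_len)
    {
        fputs(PAGE_HEADER, out);
        fputs(nav_html, out);
        fwrite(content, 1, content_len, out);
        fputs(PAGE_FOOTER, out);
    }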

The blog section is a memory-mapped set of C structures, but in the next rewrite I'll directly embed blog posts in the program, too.

I did it this way instead of using a static site generator because I realized that there's no such thing as a static site. My server has to run code to serve any website, including a "static site", so why arbitrarily limit it to the code for loading static files? https://www.immibis.com/blog/12

Not a single library is used, besides libc - at least in the backend code. I use nginx as a reverse proxy and SCGI as the interface from nginx to the C backend.

Alifatisk 1 day ago
I built a site using D once, I almost forgot how instant websites could be.
zahlman 1 day ago
> Why Is This Site Built With C

The short version: the author eventually decided that statically generating the site would require literally only a Markdown parser and a wrapper to iterate over .md files and fix internal links and add a common header and footer. The author then found a Markdown parser implemented in C and therefore interfaced to it in C (which of course involves a bunch of buffer manipulation and memory management and looks incredibly clunky compared to any obvious choice of language).

> I looked for a better alternative and found md4c, which is a parser written in C with no dependencies other than the standard C library. It also has only one header file and one source file, making it easy to embed it straight into any C project.

I see three of each looking at the repository's src/ folder.

> My website converter script, which is all in this 250 LOC source file (less md4c) is feature-complete and runs on any compiler that supports the C standard from 1999 onwards. There's no platform-dependent code and it's portable to Windows, Linux, and MacOS.

Feature-complete if you're the author, I guess. Portable if you're willing to edit source code and recompile just to change file path configuration. Except for the part that makes a system call to `find` (prepared in a statically allocated buffer using sprintf without bounds checking). As far as I can tell, <ftw.h> is also Linux-specific, or at least POSIX-specific.
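
For reference, the shell-out could be replaced by a walk with nftw, though that is still POSIX rather than ISO C; a sketch (hypothetical "posts" directory):

    #define _XOPEN_SOURCE 500
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <ftw.h>

    /* Print every .md file under posts/ -- what the shelled-out `find`
       does, without the subprocess or the sprintf'd command line. */
    static int visit(const char *path, const struct stat *sb,
                     int type, struct FTW *ftwbuf)
    {
        (void)sb; (void)ftwbuf;
        size_t n = strlen(path);
        if (type == FTW_F && n > 3 && strcmp(path + n - 3, ".md") == 0)
            puts(path);
        return 0;                   /* nonzero would stop the walk */
    }

    int main(void)
    {
        return nftw("posts", visit, 16, 0) == 0 ? 0 : 1;
    }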

250 LOC if you ignore the author's own array, file and string libraries as dependencies.

And, again: that 250 LOC is way more than it would be to do the equivalent in a higher-level language.

> It seems better than some alternatives like pelican which is written in Python and thus will be slower to parse md files.

You'd think experienced C programmers would know to benchmark things and not make assumptions. And if the goal is to find just a markdown parser, this simply doesn't reflect looking hard enough. For example, I can easily find https://pypi.org/project/Markdown/ ; in fact it's what gets used under the hood by Nikola (and I'd guess Pelican too). It's less than 1MB installed, pure Python with no further dependencies. I imagine the story is similar for Hugo. Sure, I wouldn't at all be surprised if it's slower, but it seems like there's considerable acceptable slack here. It's good enough to enable Nikola to implement an auto-refresh using a 1-second watchdog timer on the source directory, for example, despite that it's also feeding the Markdown result into a full templating system.

Klonoar 1 day ago
...am I taking crazy pills or reading this wrong somehow?

> I've been writing about things on a personal website since 2017.

> GitHub pages didn't exist at the time

GitHub pages existed before 2017. Like, almost a decade before.

goykasi 1 day ago
I thought the same. He also mentioned hosting on DO and droplets not existing. I'm 99.9% sure they existed too. Droplets were literally what they were selling, if I'm not mistaken. Gmail has always been free too (from his archive.org link).

So much of this post feels like alternate history and gloating. It's cool that he wrote a wrapper around an existing library, but his main argument is that C will still be around in "the upcoming decades." I'm willing to bet money on Hugo/Go existing and still working too -- or any number of other static-site generators.

edit: This can all be done by just pushing markdown files to GitHub anyway. GitHub will automatically generate the HTML if you do some basic configuration in your repo.

sgt 1 day ago
Why doesn't the site allow for navigating to the other articles?
foul 1 day ago
mhm. I don't understand why a makefile or a redo file is out of the picture.

Yeah, if you change something in the header or footer, build speed matters, but when the HTML file is done, it's done.

TheRealPomax 1 day ago
Your server is irrelevant. If it's being visited by humans, it just needs to get your information to them in a speedy-enough fashion. It's the content you're serving that matters, not what's serving it. (5MB React bundles? You're bad at the web and you should feel bad. Pure HTML and CSS using brotli compression? You're a superhero)
immibis 1 day ago
The server technology is relevant because it controls how the information gets to humans. A site without 5MB of Javascript runs faster than one with 5MB of Javascript, all else equal. And Hacker News is a site for technically-minded people who are here to read about things like servers, so even if it was not relevant, it would still be appropriate for Hacker News.
TheRealPomax 1 day ago
To a degree, but unless you need to serve a literal thousand-plus humans per second (AI bots don't count, modern problems require modern solutions and "your own" is not a solution, use an AI maze SaaS), it's still irrelevant: every modern CPU and even an inefficient scripting language can serve hundreds of concurrent connections as long as the content itself is proper content instead of "modern" nonsense bundles.

Sending pre-compressed html and css (with a sprinkling of JS at best to enhance the experience after it's already in the browser) is kilobytes of work at most (irrespective of how much it unpacks to on the client).

The biggest lie the modern web sold us is that information needs to be packed up in complex datatypes and be interpreted on the client before it's information again.