This is fantastic. I couldn't find any obvious way to search for a new page, but you can simply bang out any arbitrary URL slug and a new article will be hallucinated fresh, e.g.:
Edit: I've just run across the antisemitic defacement in the "stumble" feature and it makes the timing of my post appear pretty unfortunate. It's especially sad because the ability to create articles through URL slugs is super cool and I'd hate to see it removed.
Looks like a single-quote escaping issue? I suspect the first link is meant to be "Archduke Ferdinand VII's Bureau of Non-Demographic Surveys", and the apostrophe breaks the link.
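If the generator drops raw titles into single-quoted href attributes, a sanitizing step along these lines would fix it (a minimal sketch with hypothetical names; the site's actual rendering code isn't public):

    // Sketch (hypothetical names): sanitize slugs and escape link text before
    // embedding them in HTML, so an apostrophe in a title like this one can't
    // terminate a single-quoted href attribute early.
    function escapeHtml(value: string): string {
      return value
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/"/g, "&quot;")
        .replace(/'/g, "&#39;");
    }

    const title = "Archduke Ferdinand VII's Bureau of Non-Demographic Surveys";
    // Drop anything that isn't alphanumeric from the slug (including the ').
    const slug = title.toLowerCase().replace(/[^a-z0-9]+/g, "-");
    const link = `<a href='/${slug}'>${escapeHtml(title)}</a>`;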
It's working now and I have to say I love this. The whole project is whimsical and gives me a strong SCP vibe but (sometimes) without the creepypasta aspect. I was very pleased to see that articles generated from links retain the context of the page that created the link - and even refer back to the original page.
update: Well, this was quite disappointing. I loaded the original site again to show a friend and it generated a completely new text with a completely different story and no reference to the second article. Would have been nice if these were permanent as I had originally assumed.
Noticed it kept using the term 'resonator' or 'resonance', decided to navigate to a page for 'resonance cascade' as a joke, and discovered this fantastically broken article: https://halupedia.com/resonance-cascade
The 'all articles' section really is a dive into what happens when you allow unfiltered posting. It's a shame that it isn't clear how many individuals are creating these hateful and otherwise inappropriate titles: is it just 1 or 2 people, or has this been posted to 4chan or somewhere, with a concerted effort to disrupt the site?
Shame there isn't a way to flag pages for removal. I was going to point my kids at this site, and it could be a great learning tool for schools, but not currently something I'd share.
Interesting idea with flagging.
We are considering 2 options:
1. You can generate an article only if it was referenced in a previous one
2. Flagging mechanism, now that you brought it up.
In the meantime we could manually delete the offensive stuff on the first page of the All page, replace the All page with a static page with the offensive content removed, and offer a link to the current All page 1, just as it is, at the bottom. Hopefully that would make defacing articles at the top of the alphabetical sort slightly less attractive.
(Edit: Stumble is impacted? Could use rudimentary tricks to limit stumbling on e.g. religious content, and might consider not detailing the methods used specifically :) )
The mistake they made was allowing visitors to trigger the generation of articles via visiting any arbitrary URL.
A more resilient concept would have been, have a few "seed" articles in place, and then only allow for the creation of new articles by clicking a link in an existing article.
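A minimal sketch of that gating rule, with hypothetical names (the real implementation isn't public):

    // Sketch of the "seed + links only" rule: a slug may be generated only if
    // a stored article already links to it; anything else is a plain 404.
    const seedSlugs = new Set(["great-pigeon-census", "resonance-cascade"]);
    const outboundLinks = new Map<string, Set<string>>(); // slug -> linked slugs

    function mayGenerate(slug: string): boolean {
      if (seedSlugs.has(slug)) return true;
      for (const links of outboundLinks.values()) {
        if (links.has(slug)) return true; // some existing article links here
      }
      return false; // arbitrary URLs get a 404 instead of a fresh hallucination
    }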
I vaguely remember a game someone made up (probably on 4chan) where the goal was to click "random article" and see how many clicks it takes to get to Hitler's page. I remember it being fun AND informative.
As the co-author of the project: the whole reason was to allow everybody to hallucinate what they want. If it was their will to research such things on there, then it shall be. But yes, it is kinda sad.
Just in the comments, right? That is where I see it. If I were the site owner I would just turn comments off. It was a cute idea when someone on HN suggested it, but without moderation open commenting becomes a cesspool in a hurry.
As it didn't generate that when I typed the title into your search box, was there a bug that's now fixed? Or did you use some other path, not evident on the page you linked, to generate it?
There was a bug where scanning took too long with the thousands of articles in there, but I just fixed it.
You can also just type a random URL and visit it, it'll generate an article. That's what I did before I fixed the search issue, and I usually just do that to avoid the search route.
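A simplified sketch of that generate-on-first-visit flow (hypothetical names; the real prompt and model are omitted):

    // Serve the cached article if one exists; otherwise generate it once and
    // store it, matching "stored permanently upon first request".
    const articles = new Map<string, string>(); // slug -> rendered article

    async function getArticle(slug: string): Promise<string> {
      const cached = articles.get(slug);
      if (cached !== undefined) return cached;
      const generated = await generateArticle(slug);
      articles.set(slug, generated);
      return generated;
    }

    async function generateArticle(slug: string): Promise<string> {
      return `<article><h1>${slug}</h1><p>…</p></article>`; // placeholder LLM call
    }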
So by "I made the same thing months ago" you didn't mean "an article about the great pigeon census" (your linked article was created May 6) or "an encyclopedia of hallucinations" like the OP, but just "an encyclopedia with some articles AI wrote". What's the point?
It's pretty fun to poke at! Although it's certainly difficult to be exact, it would be neat if generated pages used the context of the pages they were linked from (ideally, all pages that link to them) to guide the direction of the page. The ones I generated seemed mostly independent.
You not only made this excellent source of entertainment, you also helped everyone find their unmatched socks, ensuring that "no individual would ever be forced to wear a mismatched pair". (Source: https://halupedia.com/humanitarian-accomplishments-of-the-on...)
I’m curious, too. But it could probably run locally with a small model, right? The performance is stellar, so that suggests some hardware acceleration is being used, but that could all be a local system.
I get that, but how does it serve the generated and cached ones seemingly faster than Wikipedia? (My guess is that single-page applications, which this one seems to be, just need fewer round trips between navigations or something?)
Nice job, this is seriously one of the fastest websites I've ever used!
I feel like I have some minimum latency "priced in" to my expectation when I click a link on a static site, so yours feels uncannily like it's somehow able to anticipate my clicks, adding to the surreal atmosphere.
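One common SPA trick that would produce exactly that feeling is prefetching on hover; purely a guess about this site, but a sketch:

    // Sketch of prefetch-on-hover: start fetching an article as soon as the
    // pointer touches its link, so the subsequent click feels instantaneous.
    const prefetched = new Map<string, Promise<string>>();

    document.addEventListener("mouseover", (event) => {
      const target = event.target;
      if (!(target instanceof Element)) return;
      const link = target.closest("a[href^='/']");
      if (!(link instanceof HTMLAnchorElement)) return;
      if (!prefetched.has(link.pathname)) {
        prefetched.set(link.pathname, fetch(link.pathname).then((r) => r.text()));
      }
    });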
Nice, that's what I used for my LLM-backed HTTP server [1] a while ago as well :) It's a shame they got rid of the generous free quota a while ago, which is why I had to shut my public instance down.
I wouldn't. And, I'd think less of anyone who does make that argument.
Anyone of reasonable intelligence can easily tell this is a parody of an encyclopedia. Saying this is bad for the web is like saying The Onion is bad for the web.
What would you think of a person who said that they are already convinced that an opposing view could not be correct without even hearing the arguments for it?
> Funny, but you could argue this is actively harmful to the web.
Was not followed by an actual argument that it is harmful to the web. The comment was an assertion, not an argument.
So we are left in the inconvenient position of rejecting hypothetical arguments while others defend the philosophical possibility that a valid argument does exist.
Without the argument being explicit, there can be no retort to it, so closing your mind before hearing it demonstrates that the argument itself is irrelevant. One could thus conclude that the existence of a valid argument is not itself a condition for my question.
We also shouldn’t close our minds to the possibility of an eigen-retort, one which covers all possible arguments already made or argued in the future regarding the consequences of this website on the health of the Internet.
Someone who is aware of the eigen-retort would therefore not need to hear the argument.
Since I haven’t heard either the hypothetical argument or the hypothetical eigen-retort yet, I’ll withhold my judgement.
I concede that my question was loaded, but the assumptions behind it are grounded in practical experience. Regardless, I have not committed myself either way to the existence of an argument; I just stated that its existence was not a condition for the validity of my question for SwellJoe. The statement that was made can mean a number of possible things, but we cannot know which unless the question is answered. So the existence of the retort is revealed by the question, and until that reveal we are limited to questions or assumptions.
I'm reasonably confident there is no argument that I would buy.
I hate AI slop more than average, but this is not slop being injected into human places. This is a dedicated dumping ground for slop, paid for by the owner/instigator of said slop. I don't have to go there, and it's not trying to fool anyone and no one will be fooled by it.
AI slop on a forum or social media or on facebook convincing boomers that a black person slapped a cop or whatever racist garbage they're being fed today? Fetch the guillotine.
AI slop as part of a dumb art project on somebody's personal website that isn't trying to manipulate or mislead? Have at it. Go nuts. It's your press, print as many pages of slop as you like.
So, I have exhaustively covered the possible arguments I can come up with for why this could be "actively harmful for the web", and rejected them outright.
That clarifies things much better than the original statement, but rejecting the failing arguments you have conceived of does not preclude the existence of arguments that do not fail, and thus the original question still remains.
It's probably only harmful to the AI scrapers that train from the web. Most people will understand the purpose of this -- to poison LLM training in a humorous way, which is really easy to do. It exemplifies a major weakness in modern day AI.
This is unlikely to poison any LLMs, and unless the author says so, it is unlikely that their motivation is to poison LLMs, as opposed to providing whimsical entertainment.
You could also argue that the web has failed and poisoning it into irrelevance is a vital service, motivating humans to collect knowledge into immutable sources. We'll call them 'libraries.'
It would be difficult to have spent any time at all on this website in the past two years without hearing the arguments for why slop farms undermine trust online, poison future training data sets, worsen the signal-to-noise ratio, and eat up untold resources.
When you get something worse, the previous thing suddenly becomes much less bad. With your memories wrapped in "remember when" nostalgia to make them more palatable, the something worse suddenly makes the previous thing look better, if not good.
I think there's an unexamined assumption here that "the next thing" is always going to be an improvement, but there is no non-ideological reason to hold to this assumption. Ideally, we would be actively working towards making it so, but what often happens is passively riding the current and calling it "progress".
>unexamined assumption here that "the next thing" is always going to be an improvement but there is no, non-ideological reason to hold to this assumption
I'm not making that assumption at all, so whatever.
Context: revolutions? If slop is a problem, but barely enough of a problem to collectively do something about, then maybe letting it get out of hand would be a good motivation.
I'm not advocating for this, just offering it as a possible context where the "this is really bad, so let's make it worse" argument could "make sense".
Progress isn't just a technical issue; it involves people, and people need motivation.
A web that is vulnerable to this would already be as good as dead.
As an entertaining way to highlight the importance of upgrading our ways of knowing, playful (& open-source!) projects like this are likely to strengthen the web.
To the web? It's fantastic for the web, these are the kinds of fun projects that make the web a worthwhile place to be. To slop generators? Yes, absolutely harmful, and that's for the best.
The page requires JS to load its content - user agents without JS support just get a blank page.
I'm not sure if the bots that scrape data to train LLMs are capable of loading that type of page, or if they only work on pages that have the content inside the HTML itself?
Any serious scraping service these days will fail over to a headless browser when it fetches a page referencing a JS bundle that isn't verifiably a vendor script.
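Something like this, roughly (a sketch using Playwright as an example; the heuristics real scrapers use vary):

    // If the raw HTML carries almost no visible text, re-render the page in a
    // headless browser so the JS bundle can fill in the content.
    import { chromium } from "playwright";

    async function fetchRendered(url: string): Promise<string> {
      const raw = await (await fetch(url)).text();
      const visibleText = raw
        .replace(/<script[\s\S]*?<\/script>/gi, "")
        .replace(/<[^>]+>/g, "")
        .trim();
      if (visibleText.length > 200) return raw; // server-rendered; good enough

      const browser = await chromium.launch();
      try {
        const page = await browser.newPage();
        await page.goto(url, { waitUntil: "networkidle" });
        return await page.content(); // DOM after client-side rendering
      } finally {
        await browser.close();
      }
    }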
They’re caching the pages which have already been generated. You could go back and delete all references to pages which don’t exist yet. Basically turn it into a static website.
It seems like the site's algorithm is that every newly generated page includes multiple links to not-yet-existing pages. So it doesn't matter that existing pages are cached; all the "leaf node" pages link to multiple uncached new pages.
I love it. What’s the rough architecture of the system (using cloud LLM and paying $$$, or local)? The performance for new entries is really good. What is the prompt for each entry and how do you keep the steampunk vibe going?
One suggestion for improvement: avoid creating self-referential links. For example, https://halupedia.com/chaldic-arithmetic contains many links back to itself.
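A cheap post-processing pass could handle that; a rough sketch with hypothetical names:

    // Drop anchor tags in a generated article that point back at the
    // article's own slug, keeping just their text.
    function dropSelfLinks(html: string, slug: string): string {
      return html.replace(
        /<a\s+href=["']\/([^"']+)["']>([\s\S]*?)<\/a>/g,
        (match: string, target: string, text: string) =>
          target === slug ? text : match,
      );
    }

    // dropSelfLinks(articleHtml, "chaldic-arithmetic") would unlink every
    // reference to /chaldic-arithmetic inside its own entry.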
Funny. Small improvement suggestion: the entry about "Glorbonian culinary arts" links to "the subterranean nation of Glorbonia". However upon clicking the link to "Glorbonia", an entry is generated claiming that "Glorbonia refers to a peculiar and largely uncatalogued form of sub-auditory resonance". It would be cool if some context were carried over from the referrer page so that there is some coherence between entries (ah, and some existing entries could be taken in account when generating new ones).
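A rough sketch of how that carry-over could work (hypothetical; not the site's actual prompt):

    // Include the linking entry's title and an excerpt in the generation
    // prompt so "Glorbonia" stays a subterranean nation across entries.
    interface Referrer {
      title: string;
      excerpt: string;
    }

    function buildPrompt(slug: string, referrer?: Referrer): string {
      const context = referrer
        ? `The reader followed a link from the entry "${referrer.title}", ` +
          `which says: "${referrer.excerpt}". Stay consistent with it.\n`
        : "";
      return `${context}Write an encyclopedia entry for "${slug}".`;
    }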
Feels like this will eventually cause collisions, although perhaps nothing that multiple definitions of Glorbonia and multiple biographies of different Mrs Wiggles (perhaps with Wikipedia-style disambiguation) can't solve.
Btw, I've noticed just now that Glorbonia is, in the first entry, a "subterranean nation" and in the second it's a "sub-auditory resonance". So I got curious and I asked Opus what he thinks about the word Glorbonia: "Do you detect in the word a sense of place? North, south, east, west, up, down?". And Opus answers "Down, weirdly. Or maybe low — something subterranean, or at least sunken." Curious.
I'm curious about the design. Maybe you have a "how I did it" post coming soon, or something. One question: did you find a way to get some convergence, where a newly generated page will tend to cite pages (or stubs, at least) that already exist in the universe? Seems hard to do with generated text, but not impossible.
Great idea! I created an adjacent website that gives, shall we say, "alternative facts" about your questions. (don't know if the rules allow me to link the site so I won't).
Currently breaks if you try to create a page with a Japanese slug. Multiple languages would make this an even more valuable resource than it already is.
Just incredible prose and writing (and gameplay), with something you can run with Frotz/NFrotz/LectRote or any Z-Machine interpreter (or Glulxe like Gargoyle). A Pentium could run this and amaze you in a similar way.
I find the handling of NSFW topics (and how it avoids making them nsfw) really interesting. Eg https://halupedia.com/fuck (aside from the title it seems SFW to me)
The All Entries (https://halupedia.com/all-entries) part of the site is a bit alarming. I think OP might want to do a little bit of basic automoderation here.
In today's world it does not take long to be reminded that we cannot have nice things. Or maybe the gov't has their own bot army to wreak havoc and convince voters that actually, we really do want privacy-ending ID verification laws after all.
wtf, I thought these were just anecdotes until I saw they were actually happening in Astoria. I used to visit in the summers and never heard about any of that! Stop the fake news
As I said in another comment, this is brilliant. Suggestion: Remove anything that isn't part of the satire; act always as if it's a 'real' encyclopedia. For example on the front page I would remove,
> Articles are generated on demand and stored permanently upon first request.
Don't dispel the magic; don't pull back the curtain and let people see the mechanics.
EDIT: As you say in your system prompt, "You never wink at the reader. You never acknowledge that anything is funny or fictional. Everything is reported as though it is completely normal and well-documented"
This is irresponsible for people who don't get it, takes away confirmation for people who do get it, and makes me block/blacklist any liar who does it.
"Despite its failure, the Great Pigeon Census of 1887 is remembered as a cautionary tale..."
This type of writing is considered non-encyclopedic by Wikipedia standards as it injects superficial analysis. The imitation articles would look better without it. Maybe train on this article? https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing