I'm going to be honest: I was pretty geared up to have a contrarian opinion until I looked at the standards, but they're actually pretty clear. A 404 could be a proper response to an unexpected query string; the query string is as much part of the URL API as the path is, and I think pretty much everyone can acknowledge that just tacking random stuff onto the path would be ill advised and undefined behavior.
[0]: https://url.spec.whatwg.org/#application/x-www-form-urlencod...
In fact lots of sites still work like that; they just hide it behind a couple of rewrite rules in Apache/nginx for SEO reasons.
On the other hand, if it's a CRUD app and you're filtering a list of entities by various field values? Returning that no items matched your selection (or an empty list, if an API) makes more sense than a 404, which would be more appropriate for an attempt to pull up a nonexistent entity URI.
204 No Content for nothing found is both not an error (because it's a 2xx code) and an indication that there was nothing found to match the request. If it's an API, a 200 with an empty JSON object or array in the body is legitimate as well, but a 204 is explicit.
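To make the contrast concrete, here's a minimal Go sketch of the two options (handler and lookup names are hypothetical, not from any comment here):

package main

import (
    "encoding/json"
    "net/http"
)

// findBooks stands in for a real lookup; it returns no matches here.
func findBooks(author string) []string { return nil }

func listBooks(w http.ResponseWriter, r *http.Request) {
    books := findBooks(r.URL.Query().Get("author"))
    if len(books) == 0 {
        // The explicit option: 204, "nothing found", still a success.
        // (The alternative is a 200 with a literal [] as the body.)
        w.WriteHeader(http.StatusNoContent)
        return
    }
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(books) // 200 with the matching items
}

func main() {
    http.HandleFunc("/books", listBooks)
    http.ListenAndServe(":8080", nil)
}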
/books/1 could return 200 or 404 depending on the existence of book #1; here it makes sense, because if /books/1 does not exist the API must say so explicitly. However, 404 belongs to the 4xx family, which means "client error". Is it an error to ask for a nonexistent book? If you enter a bookshop and ask for a book they don't have, you did not "make a mistake". It's not as if you asked for a chainsaw. But in an API, especially with hypermedia, you are not supposed to request a resource that does not exist (unless the API provided a link to a resource that was deleted before the caller tried to reach it).
/users/ returning a 404 in an API means that this resource does not exist. As in, this is not a part of the API.
/users/123 returning a 404 means this user record does not exist.
Yes this means that a 404 is context dependent but in a way that makes it easier for a human to think of and reason about.
Lots of REST libraries that I’ve used treat any 400-range response as an error, so generating a 404 for an empty list would just create more headaches.
Responses with status codes in the 400 range are client errors, so the client shouldn't retry the same request. So a 404 is appropriate despite how annoying a library might be at handling it. Depending on which language/ecosystem you are using, there are likely more sane alternatives.
Although I do feel like I've seen too many instances of a 404 being used for an empty collection where it would make more sense to return `[]` and treat it as an expected (successful) state.
It would have been nice if there were an actual grouping of retriable and non-retriable codes, but in reality it’s a complete mess.
But at a minimum, beware of 429. That’s not a permanent outage, and it’s a frequent one you might get that needs a careful retry.
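For illustration, a careful retry might look something like this Go sketch (the URL, attempt count, and fallback delay are all made up; only the integer-seconds form of Retry-After is handled):

package main

import (
    "fmt"
    "net/http"
    "strconv"
    "time"
)

// getWithRetry retries only on 429, honouring Retry-After when present.
func getWithRetry(url string, attempts int) (*http.Response, error) {
    for i := 0; i < attempts; i++ {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        if resp.StatusCode != http.StatusTooManyRequests {
            return resp, nil // other 4xx included: don't retry those
        }
        resp.Body.Close()
        delay := 2 * time.Second // fallback when Retry-After is absent
        if s := resp.Header.Get("Retry-After"); s != "" {
            if secs, err := strconv.Atoi(s); err == nil {
                delay = time.Duration(secs) * time.Second
            }
        }
        time.Sleep(delay)
    }
    return nil, fmt.Errorf("gave up after %d attempts", attempts)
}

func main() {
    if resp, err := getWithRetry("https://example.com/", 3); err == nil {
        resp.Body.Close()
    }
}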
That's not obvious at all. If I receive JSON data that contains a property I'm not aware of, I don't reject the entire document for that reason. In the case of query strings, extra query parameters might be used by other parts of the stack besides yours, so rejecting the entire request because someone somewhere else is trying to pass information to itself is the wrong approach.
As a web developer, you’re like the guy standing with a clipboard outside a fancy club checking whether people requesting entry are allowed in or not. Basically, level 1 security.
If someone is not on the list, your job is to default to declining them access, not granting them access assuming level 2 security will handle them at a deeper layer.
It’s possible that the teams you work with expect fuzzy behaviour from the website but that’s a choice, not a practice.
This is how the vast majority of websites work. The practical reason is obvious: when we model the behaviour our code depends on, we want to create the simplest possible model that allows our code to work as expected. Placing requirements on it that our code doesn't actually depend on is useless, unneeded, complexity.
> As a web developer, you’re like the guy standing with a clipboard outside a fancy club checking whether people requesting entry are allowed in or not. Basically, level 1 security.
there is no security benefit to filtering out unneeded url parameters.
there is - security in depth.
If a url parameter would've been a vulnerability because something lower down the stack misinterprets it (and the param wasn't necessary for your app in the first place), then you've just left a window open for the exploit.
If the set of url params is known ahead of time (which I claim should be true), then you could make adding unknown params an error.
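That strict mode is cheap to implement; a minimal Go sketch (the parameter whitelist is hypothetical):

package main

import (
    "fmt"
    "net/http"
)

// allowed is the set of query parameters this endpoint knows about.
var allowed = map[string]bool{"q": true, "page": true}

func search(w http.ResponseWriter, r *http.Request) {
    for key := range r.URL.Query() {
        if !allowed[key] {
            // Strict mode: unknown parameter -> client error.
            // (400 here; per the thread above, 404 is the other candidate.)
            http.Error(w, "unknown query parameter: "+key, http.StatusBadRequest)
            return
        }
    }
    fmt.Fprintln(w, "ok")
}

func main() {
    http.HandleFunc("/search", search)
    http.ListenAndServe(":8080", nil)
}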
What about passing extra data to fill the server's memory, with either extra known junk or a script/executable to use with a zero-day in an internal component or something?
To misuse the nightclub analogy: it’s like checking that bags are no larger than A4 and disallowing knives and other weapons.
Oh yeah? I remember a lot of semicolons from Perl and other CGI stuff where we would now use ampersands, back in the day, both in the path and in the query. (Sometimes the ? itself would be written ;.)
The really funny thing about this is that, when I was worrying about possible side effects if I responded 404, I somehow completely forgot how much of the web’s history the path has been useless for. Paths have won. No one really starts new things with URLs like /item?id=… any more. Yay!
So en.wikipedia.org/wiki/// is the article about C++ style comments
https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...
effectively lets you specify what parts of a query are relevant. So for example
url?a=b&c=d matches url?c=d&a=b in terms of caching
Though there are “smart” CDNs that will resize images etc.; all bets are off for those.
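A cache without such a header can approximate the same thing by normalising the query into its cache key; a rough Go sketch of the idea (the "relevant" list is an assumption, not how any particular CDN works):

package main

import (
    "fmt"
    "net/url"
    "sort"
    "strings"
)

// cacheKey keeps only the relevant parameters, sorted, so that
// ?a=b&c=d and ?c=d&a=b map to the same cache entry.
func cacheKey(rawURL string, relevant []string) (string, error) {
    u, err := url.Parse(rawURL)
    if err != nil {
        return "", err
    }
    q := u.Query()
    var parts []string
    for _, name := range relevant {
        for _, v := range q[name] {
            parts = append(parts, name+"="+v)
        }
    }
    sort.Strings(parts)
    return u.Path + "?" + strings.Join(parts, "&"), nil
}

func main() {
    k1, _ := cacheKey("https://example.com/page?a=b&c=d&utm_source=x", []string{"a", "c"})
    k2, _ := cacheKey("https://example.com/page?c=d&a=b", []string{"a", "c"})
    fmt.Println(k1 == k2) // true: both normalise to /page?a=b&c=d
}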
This feels like a "technically correct is the best kind of correct" situation. Like, technically, yeah, web servers may respond 404 if they don't understand a query parameter, but in practice that is not how URLs are normally conceptualized.
Seems a lot better than the other potential world we could have lived in, where paths were a black box and every web server/framework invented their own structure for them.
It’s your website. Have fun with it! Do dumb things! :-)
MII//epi is converted to MII/epi
- user gritzko,
- project beagle,
- view blob,
- commit a7e17290a39250092055fcda5ae7015868dabdb4,
- file path VERBS.md
... all concatenated indiscriminately.
Grouping data by user is common and normal in computing: /home laid precedent decades ago.
Project directories are an extremely common grouping within a user’s work sets. Yeah, some of us just dump random files in $HOME, but this is still a sensible tier two path component.
The choice to make ‘view metadata-wrapped content in browser HTML output’ the default rather than ‘view raw file contents’ the default is legitimate for their usage. One could argue that using custom http headers would be preferable to a path element (to the exclusion of JavaScript being able to access them, iirc?) or that the path element blob should be moved into the domain component or should prefix rather than suffix the operands; all valid choices, but none implicitly better or worse here.
Object hash is obviously mandatory for git permalinks, and is perhaps the only mandatory component here. (But notably, that’s not the same as a commit hash.) However, such paths could arguably be interpreted as maximally user-hostile.
File path, interestingly enough, is completely disposable if one refers to a specific result object hash within a commit, but if the prior object hash was required to be a commit, then this is a valid unique identifier for the filesystem-tree contents of that commit. You could use the object hash instead of the full path within the commit hash, but that’s a pretty user-hostile way to go about this.
So, then, which part of the ordering and path selections do you consider indiscriminate, and why?
Query strings are more verbose, as they force you to give each param a name.
edit: for instance, that specific VERBS.md is represented by the blob 3b9a46854589abb305ea33360f6f6d8634649108.
https://github.com/gritzko/beagle/a7e17290a39250092055fcda5ae7015868dabdb4/VERBS.md
this should be sufficient to represent the file. "blob" is like a descriptor of the value that follows. It would be like doing this:
https://github.com/user/gritzko/project/beagle/blob/a7e17290a39250092055fcda5ae7015868dabdb4/file/VERBS.md
this actually irks me every time I see it in a github url
Except it's not, because the oid can be a short hash (https://github.com/gritzko/beagle/blob/a7e172/VERBS.md) and that means you're at risk of colliding with every other top-level entry in the repository, so you're restricting the naming of those top-level entries, for no reason.
So namespacing git object lookups is perfectly sensible, and doing so with the type you're looking for (rather than e.g. `git` to indicate traversal of the git db) probably simplifies routing, and to the extent that it is any use makes the destination clearer for people reading the link.
Back when GitHub URLs were kind of cool, github.com/user/gritzko/project/beagle would have been much less cool than just github.com/gritzko/beagle.
They are not. There's just a routing layer below the repository.
Of course there's nothing to stop you using URIs like this (I think Angular does, or did at one point?) but I don't think the rules for relative matrix URIs were ever figured out and standardised, so browsers don't do anything useful with them.
For sites without Javascript, it's great for things like search boxes, tables with sorting/filtering, etc. instead of POST, since it preserves your query in the URL.
https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
Or you could accept that you're probably going to need a round trip to the server and use a normal URL and it's fine.
For all but the absolute biggest websites in the world, anyhow. At Facebook or Google scale yeah it's needed.
So yes, query parameters existed before CGI, but to use them you had to hack your server to do something with them (IIRC NCSA web servers had some magic hacks for queries). CGI drove standardization.
package main

import (
    "fmt"
    "net/http"
    "time"
)

func specialHandler(w http.ResponseWriter, r *http.Request) {
    if time.Now().Weekday() == time.Tuesday {
        http.NotFound(w, r)
        return
    }
    fmt.Fprintln(w, "server made a decision")
}
Your server can make decisions however you program it to, you know? It's just software.
Forgive the phone-posting.
Paths are hierarchical; query strings are name/value.
(Note I speak of common usage.)
You can create a different convention, but that one is pretty dang useful.
How does this benefit the other website? How does this hurt the author's website?
I am completely confused about the behavior of both sides here.
I get that when I run an ad campaign I want Google to add a UTM query string, so I can track which campaign users arrived from - but then the origin and the destination are working together. Here the origin just adds stuff for no reason. Why?
Honestly, it is quite useful for niche/startup sites. I have been on both ends of conversations that began from seeing these in web analytics (as someone that saw incoming traffic from a site and reached out, and as someone that received contact from a site I linked to) - and both times it ended in a mutually beneficial partnership.
I can understand the privacy argument to some degree, but it provides no more information than the standard Referer header (and if you use analytics like Simple Analytics/Plausible, it is a lot more visible).
Query string additions are commonly used to track things. You can see that lots of people don’t want that by the existence of Firefox features like “copy clean link” and Enhanced Tracking Protection, which proactively strips some of them, like UTM parameters.
Some sites happily participate in what I will glibly call the tracking economy. They may benefit because the recipient will see in their logs that lots of people are coming from their site, and do something that helps their site because of that.
My rejecting query strings is a simple form of protest against that system.
Some web pages don't send referrers by making all links rel="noreferrer". Mastodon used to do this by default, though now they've changed their stance.
Links opened from non-browser apps don't have any referrer information either. E.g. if somebody shares your link on iMessage, WhatsApp, or Telegram.
Email clients may also strip out referrers, but I'm not entirely sure about this one.
If people read your work via RSS readers, you'll almost certainly not get any referrers. Unless it's a web-based reader like Feedly.
My website gets a lot of traffic marked as "Direct / None" by Plausible. I suspect this is traffic from RSS readers or Mastodon, but I can't be sure. A few times I've considered adding a "?ref=RSS" to all URLs served to RSS readers and "?ref=Mastodon" to everything I post on Mastodon. But like the author of this post, I feel uncomfortable tracking my readers like this.
Back in the Stone Age, we called these “Webrings,” but they weren’t as fancy.
One of the issues that I faced, while developing an open-source application framework, was that hosting that used FastCGI would not honor Auth headers, so I was forced to pass the tokens in the query. It sucked, because that makes copy/paste of the Web address a real problem: it would often contain tokens. I guess maybe this has been fixed?
In the backends that I control, and aren’t required to make available to any and all, I use headers.
So you were writing your application as an FCGI app, and (e.g.) Apache was bungling Auth headers? Can you expand on this? Curious about the technical detail of (I guess) PARAM records not actually giving you what you expect?
I just remember the auth headers never showing up in the $_SERVER global (it was a PHP app). This was what I was told was the issue. They made it sound like it was well-known.
His site returns (I think incorrectly) a 414 if a request includes a query string. If this protest is meant to advocate for the user, who presumably wasn't able to manage that string in the first place, why would you penalize them for it being there?
Why not just use it as a cue to tell users how they can make this decision themselves (e.g. through browser tools)?
400 Bad Request, the generic client error code, which is correct but boring;
402 Payment Required, and honestly if you want to pay me to make a particular URL with query string work, I’m open to it;
404 Not Found, but it’s too likely to have side effects, and it doesn’t convey the idea that the request was malformed, which is what I’m going for; and
303 See Other with no Location header, which is extremely uncommon these days but legitimate. Or at least it was in RFC 2616 (“The different URI SHOULD be given by the Location field in the response”), but it was reworded in 7231 and 9110 in a way that assumes the presence of a Location header (“… as indicated by a URI in the Location header field”), while 301, 302, 307 and 308 say “the server SHOULD generate a Location header field”. Well, I reckon See Other with no Location header is fair enough. But URI Too Long was funnier.
https://chrismorgan.info/no-query-strings?foo
Obviously it's against the spirit of the thing, but I don't think it's wrong per se.
>Complain to whoever gave you the bad link, and ask them to stop modifying URLs, because it’s bad manners.
It's ironic that an error response so blatantly violating the robustness principle is throwing shade about bad manners.
In our modern world, the robustness principle has become an invitation to security bugs and vendor lock-in. Edge cases sneak through one system thanks to robustness, then trigger unfortunate behavior when they hit a different system. Two systems try to do something reasonable with an ambiguous case, but do it differently, leading to software that works on one failing to work on the other.
That said, we are paying a huge complexity cost due to our efforts to allow nonconforming pages. This complexity is widely abused by malicious actors. See, for instance, https://cheatsheetseries.owasp.org/cheatsheets/XSS_Filter_Ev... for ways in which attackers try to bypass security filters. A lot of it is only possible because of this unnecessary complexity.
Another option to consider is "418 I'm a teapot": teapots usually also don't support query strings
Several options which seem like they might be appropriate aren't on close examination:
- "406" ("Not Acceptable") which is based on content-negotiation headers.
- "409" ("Conflict") which is largely for WebDAV requests.
- Others such as 411, 422, and 431 are also for specific conditions which aren't relevant here.
- 300 or 500 errors are inappropriate as this isn't a relocation or server-side failure, it's a client-side request problem.
Teapot or URI Too Long seem like the best bets.
I’ve always used them in API servers when a client was POSTing to create a duplicate of a unique item.
I'm not making this up, btw. An old NOC I worked at emitted every error as 200 OK, with the real error in the body message. They were a real shitshow.
The technical purist: you’re modifying a URL in a way that, while in line with accepted custom, is technically incorrect. URLs should (the least effective type of should) basically be treated as opaque.
Social: it’s tracking stuff, sibling comment trees are good, I won’t reiterate.
Clutter: it’s getting in the way of the bit you should care about, and contributing to normal people no longer caring about URLs because they’re too hard, too complex.
There are a lot of reasons I might not want a site to know where I came from to get to their site. It is basically sharing your browsing history with the site you are visiting.
Because of this, there have been a lot of updates to the http referer header, with restrictions on when it is sent, and an ability to opt out of the feature entirely.
Adding a url parameter with the same information bypasses any of these existing rules and ability to opt out. They should just use the standard.
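The standard opt-out the parent comment alludes to is the Referrer-Policy response header; a site that doesn't want its outbound links to leak a referrer can just send it, e.g. in Go:

package main

import (
    "fmt"
    "net/http"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        // Ask browsers not to send a Referer header when users
        // follow links away from pages served here.
        w.Header().Set("Referrer-Policy", "no-referrer")
        fmt.Fprintln(w, `<a href="https://example.com/">an outbound link</a>`)
    })
    http.ListenAndServe(":8080", nil)
}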
This is talking about links to third party sites, not your own.
Isn't this functionally the exact same?
You could simply throw the information away.
It's a ridiculously extreme stance and lacks proper explanation how this will lead to a better web.
They aren't saying the concept of query strings is bad; they're saying unsolicited query strings added during referral are the issue.
On a more personal note, I hate it when I go to copy a link to send via a message, and the tracking code glued onto it is twice as long as original URL... I either have to fiddle around with it to clean it up or leave the person I sent it to to wonder wtf am I on about with a screenful of random characters...
So it's violating users' privacy, it's shit UX, and on top of that, nobody asked for it...
Query strings are useful for way more than just tracking. Saving and servicing search queries is a way more common use case. So assuming it's only useful for tracking is very misleading.
Query strings are probably the least invasive tracking. They are transparent, obvious, and anonymous. Users are free to strip out and edit query strings if they don't want them.
More to the point, I can essentially do the same thing with HTTP routing - create an infinite number of unique URLs for tracking purposes. In that regard calling out query strings specifically for essentially the same thing but more transparently seems like splitting hairs.
Filters especially make sense as query params, as they are non-sequential but still visually readable as to what they do.
URL slugs make sense for sequential pages that are hierarchical, but make no sense for non-hierarchical data/routes.
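For example (a hypothetical listing endpoint in Go), each filter arrives named and order-independent, which is what makes the URL readable:

package main

import (
    "fmt"
    "net/http"
)

// /books?author=tolkien&sort=year reads the same backwards:
// /books?sort=year&author=tolkien - unlike a slug such as /books/tolkien/year.
func books(w http.ResponseWriter, r *http.Request) {
    q := r.URL.Query()
    author := q.Get("author") // "" when the filter is absent
    sortBy := q.Get("sort")
    fmt.Fprintf(w, "author=%q sort=%q\n", author, sortBy)
}

func main() {
    http.HandleFunc("/books", books)
    http.ListenAndServe(":8080", nil)
}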
Services can force tracking into links by encoding the whole url into a shortlink that makes it impossible to just remove the tracking alone as everything is encoded into a shorter non editable string.
If I am handing out maps to your address, letting people know who is publishing the map is generally a good thing.
This is like saying having a return to sender address on mail is an invasion of privacy.
Both are good but it seems fair to give priority to the original.
I actually get a lot more annoyed by routing parsers that do the same thing GET query strings do, only by pretending to be a real URL.
I think 404 probably makes the most sense as the response if a query string is not expected but is present anyways, although 400 might also be suitable.
I use this bookmarklet to strip query params before sharing a link:
javascript:(()=>navigator.clipboard.writeText(location.origin+location.pathname))();
That converts https://example.com/?p=20&utm_source=spam
to: https://example.com/
when in fact we want the following: https://example.com/?p=20
A possible improvement can be: javascript:(()=>{const u=new URL(location.href);[...u.searchParams.keys()].forEach(k=>{if(k.startsWith('utm_')){u.searchParams.delete(k)}});navigator.clipboard.writeText(u.href)})();
https://chrismorgan.info/no-query-strings?
Never have I seen such a sassy web server
I noticed that his server also doesn't accept URLs ending in a single `/`: https://chrismorgan.info/no-query-strings/
But instead of the banned query strings message, it just returns a very sassy not-a-404 page. Once again, this is violating a common convention, but there's nothing in the HTTP spec that requires treating these URLs the same. Similarly the site also 404s when you add extra slashes like https://chrismorgan.info///no-query-strings
digression: I love trying "domain.com//" on various sites. Occasionally it'll trigger weird errors like a 502 or 500.
When dealing with static file servers:
For URLs that are supposed to include a trailing slash and the server will find that directory and serve its index.html: it’s customary, though not ubiquitous, to redirect from no-slash to slash. (Some, including popular commercial services, serve the index.html file instead of redirecting to add the slash. This is extremely wrong because it changes the meaning of relative URLs.)
But the other way round is not common.
My URLs don’t include a file extension, and I think that’s influencing your perception into thinking no-query-strings is logically a directory name. But it’s not, it’s logically a file name, just with the .html removed as unnecessary.
Take https://susam.net/no-query-strings.html as an example; Susam is more clearly just serving from a file system than I am, and leaves the “.html” file extension in the URL. Do you expect https://susam.net/no-query-strings.html/ to work? I hope not. It’s a 404, just as I’d expect, because there is no directory with that name.
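Incidentally, Go's standard library file server implements exactly the customary behaviour described above:

package main

import "net/http"

func main() {
    // http.FileServer redirects a request for /docs (a directory)
    // to /docs/, and serves docs/index.html for /docs/ - so relative
    // URLs inside the page resolve correctly.
    http.ListenAndServe(":8080", http.FileServer(http.Dir("./site")))
}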
> not-a-404 page
No, that’s a 404, just a plain old boring 404, same as any other. In fact, it’s the same 404 page I’ve been using since 2019, just with dark mode support added.
> extra slashes
Ah, now for that I had to go out of my way, because Caddy misbehaves out of the box: https://chrismorgan.info/Caddyfile#:~:text=%40has%5Fmultiple...
> digression: I love trying "domain.com//" on various sites.
Closely related is adding the trailing dot of a fully-qualified domain name: https://example.com./. I didn’t remember to try this on my new site, but it turns out Caddy won’t talk at https://chrismorgan.info./, so that’s probably good.
Facebook: no.
Pinterest: ?utm_source=Pinterest&utm_medium=organic.
ChatGPT: ?utm_source=chatgpt.com. (Aside: wow it’s confidently and atrociously wrong if you ask it about me. Ask it just vaguely enough, and it hallucinates someone clearly inspired by me, but who has done a whole lot of stuff that I haven’t. Ask it more precisely about me, and it gets all kinds of details wrong still. I feel further vindicated in hating this stuff. You made me use ChatGPT for the very first time.)
LinkedIn: no.
Twitter: no.
Reddit: no.
YouTube: no.
> if you get enough traffic that you can pick which sources you want to allow, that's a good problem to have.
Nah, I just don’t care about them. It’s my place, I’m doing things on my own terms. Should I discover it to be causing me problems, I’ll burn that bridge when I come to it.
Edit: Perhaps it only mangles links for logged-in users? That raises the possibility that some of the others may also only affect logged-in users.
(Trying with other ones I'm logged in on: Reddit doesn't mangle (obviously), Twitter doesn't mangle.)
"I don’t like people adding tracking stuff to URLs" and "You abuse your users by adding that to the link" and "no unauthorised query strings" and "At present I don’t use any query strings" but for some reason ?igsh, which i'm pretty sure is an instagram tracking parameter, is allowed. weird
> Is it not a random walk? Might sound pedantic but if there is graph structure I am interested.
The network is a directed graph. Every Wander Console declares a few other consoles as its neighbours. The person setting up the console decides who they want to list as their neighbours. So if we call the network graph X, then the set of vertices is:
V(X) = the set of all URLs that point to Wander Consoles
and the set of directed edges is: E(X) = {(u, v) : u, v in V(X) and u declares v as its neighbour}
The traversal between consoles is not strictly a random walk. If I could call it something, I would call it randomised graph exploration with frontier expansion. On each click of the 'Wander' button, the tool picks one console at random from the set of discovered consoles and visits that console. It then fetches the neighbours declared by that console and adds any newly discovered consoles to the set.
The difference from a random walk is that the next console is not chosen from the neighbours of the last visited console. It is chosen from the whole set of consoles discovered so far. In other words, each click expands the known part of the graph, but the console used for that expansion is selected randomly from all discovered consoles, not just from the last console visited.
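A compact Go sketch of that procedure (the console names and the neighbour map stand in for real HTTP fetches):

package main

import (
    "fmt"
    "math/rand"
)

// neighbours maps each console URL to the consoles it declares.
var neighbours = map[string][]string{
    "a.example": {"b.example", "c.example"},
    "b.example": {"a.example", "d.example"},
    "c.example": {"d.example"},
    "d.example": {},
}

// wander performs one click: pick a random console from everything
// discovered so far (not just the last one visited), visit it, and
// add its neighbours to the discovered set (frontier expansion).
func wander(discovered map[string]bool) string {
    all := make([]string, 0, len(discovered))
    for c := range discovered {
        all = append(all, c)
    }
    next := all[rand.Intn(len(all))]
    for _, n := range neighbours[next] {
        discovered[n] = true
    }
    return next
}

func main() {
    discovered := map[string]bool{"a.example": true} // seed console
    for i := 0; i < 5; i++ {
        fmt.Println("visiting", wander(discovered))
    }
}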
Instead of responding with an error, give a page that states “The link you followed to get here appears to have had some tracking gubbins added, in case you are a bot following arbitrary links, and/or using random URL additions to look like a more organic visit, please wait while we run a little PoW automaton deterrent before passing you on to the page you are looking for.” then do a little busy work (perhaps a real PoW thingy) before redirecting. Or maybe don't redirect directly, just output the unadorned URL for the user to click (and pass on to others). This won't stop the extra gubbins being added of course, but neither will the error and this inconveniences potential readers less.
>Want to share an amazon product on a chat to discuss about it. I would have liked a nice short url that I can copy, instead I get a monstrosity, it forces me to manually select only the id portion of it if I want to share it.
A link that is "https:// web.site" is fine.
A link that is "https:// web.site?via=another.site" is fine.
A link that is "https:// web.site?fbm=avddjur5rdcbbdehy63edjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63edaaaddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednzzddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63ednddjur5rdcbbdehy63edn"
is annoying as shit and I need to literally apologize to people after sending it if I forget to manually redact the query string. Don't abuse this.
https://www.google.com/search?newwindow=1&sca_esv=8061bd9cb1...
Edit: which luckily and sensibly Hacker News cuts short since it's 463 characters
Since the purpose is to show the full URL with trackers and other cruft, that's sensible here:
https://www.google.com/search?newwindow=1&sca_esv=8061bd9cb19cd450&sxsrf=ANbL-n7S60ZBdf0lh5kQ8RojJdQpnM0S5w:1778353180297&q=clearurls+addon&source=lnms&fbs=ADc_l-aN0CWEZBOHjofHoaMMDiKpeTF8ggB1qASWZfpybz5TQZmqMiWOgtbP_iLwZE3_BsqFrIkjQk30pNpcyOJjgYT1NYhSr_eVWusunSdIYLAa1WWhJm7VPvRsNUkHss5YZDSVhzEth7KnRsP0kwdL-3ylxxDz_j5WL-QtjJdzQePIWAeCwn7532w9WuSzSqnY0V2tn342eEk_wDwxk45MDY_JuA-5CA&sa=X&ved=2ahUKEwjH3uLs8ayUAxUghP0HHVXuOeIQ0pQJegQICxAB&biw=1296&bih=711&dpr=2.22
And yeah, that's pretty awful.
In conclusion, Google must be destroyed.
https://support.mozilla.org/en-US/kb/enhanced-tracking-prote...
You can’t just send arbitrary query string parameters to a server and assume they will just ignore them. Just like you can’t just remove query string parameters and assume the URL will work.
Most sites don't mind or break, some sites get value from the behavior in ways hard to replicate in other ways – and those sites that don't like such additions can easily ignore them. And a few lines of code will work better than ineffectually appealing to manners, when the freedom of the web's form of hypertext, and protocols, gives the outlink authors full freedom to craft URLs (and thus requests) however they like.
You’re handing out someone elses’s contact details, but giving the person you hand them to a completely fabricated expectation for how the interaction will go.
Example.com/interesting -> bookmark folder one
Example.com/interesting?dummy=t -> bookmark folder two
Right on! It's so liberating having your own wee corner of the internet.
https://chrismorgan.info/no-query-strings#:~:text=So%20I%E2%...
but this one was too long:
https://chrismorgan.info/%6e%6f-%71%75%65%72%79-%73%74%72%69...
It uses 4xx, but not just 400 :)
I build a lot of internal applications, and one of my golden UI rules is that a user should be able to share their URL and other users should be able to see exactly what the sender did.
So if you have a dashboard or visualization where the user can add filters or configurations, I have all of their settings saved automatically in the URL. It's visible, it's obvious, it's easy, it's convenient.
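A sketch of that round trip in Go (names invented): serialise the view's settings into the query string, and the recipient's session reads them back:

package main

import (
    "fmt"
    "net/url"
)

// stateToURL encodes the dashboard state so that pasting the link
// reproduces the exact view.
func stateToURL(base string, filters map[string]string) string {
    q := url.Values{}
    for k, v := range filters {
        q.Set(k, v)
    }
    return base + "?" + q.Encode() // Encode() sorts keys: stable URLs
}

func main() {
    link := stateToURL("https://dash.example/report",
        map[string]string{"region": "emea", "from": "2024-01-01"})
    fmt.Println(link) // https://dash.example/report?from=2024-01-01&region=emea

    u, _ := url.Parse(link)
    fmt.Println(u.Query().Get("region")) // the receiver reads it back
}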
>There is also a moral question here about whether it is okay to modify a given URL on behalf of the user in order to insert a referral query string into it. I think it isn't.
These dogmatic technical screeds are all so weird to me. They usually reveal more about the author's lack of experience or imagination than they provide a useful truism.
> If I wanted to know I’d look at the Referer header; and if it isn’t there, it’s probably for a good reason. You abuse your users by adding that to the link.
The reason is that the referrer headers are a usability and privacy nightmare. It's weird for the author to jump to such a conclusion.
This referral information is being done purely as a courtesy to the webhost. If we imagined a world in which ChatGPT or Wikipedia launched massive hugs of death on referral links without attributing themselves, that is a much, much worse outcome.
Example: The Browser is a well known link aggregation paid periodical. I subscribe, and every 1 in 10 or 20 links I clicked, it'd just break outright and I'd have to tediously edit the URL to fix it (assuming the website didn't do a silent ninja URL edit and make it impossible for me to remember what URL I opened possibly days or weeks ago in a tab and potentially fix it). This was annoying enough to bother me regularly, but not enough to figure out a workaround.
Why? ...Because TB was injecting a '?referrer=The_Browser' or something, and the receiving website server got confused by an invalid query and errored out. 'Wow, how careless of The Browser! Are they really so incompetent as to not even check their URLs before mailing an issue out to paying subscribers?'
I wondered the same thing, and I eventually complained to them. It turns out, they did check all their URLs carefully before emailing them out... emphasis on 'before', which meant that they were checking the query-string-free versions, which of course worked fine. (This is a good example of a testing failure due to not testing end-to-end or integration testing: they should have been testing draft emails sent to a testing account, to check for all possible issues like MIME mangling, not just query string shenanigans.)
After that they fixed it by making sure they injected the query string before they checked the URLs. (I suggested not injecting it at all, but they said that for business reasons, it was too valuable to show receiving websites exactly how much traffic TB was driving to them on net, because referrers are typically stripped from emails and reshares and just in general - this, BTW, is why the OP suggestion of 'just set a HTTP referrer header!' is naive and limited to very narrow niches where you can be sure that you can, in fact, just set the referrer header.)
But this error was affecting them for god knows how long and how many readers and how many clicks, and they didn't know. Because why would they? The most important thing any programmer or web dev should know about users is that "they may never tell you": https://pointersgonewild.com/2019/11/02/they-might-never-tel... (excerpts & more examples: https://gwern.net/ref/chevalier-boisvert-2019 ). No matter how badly broken a feature or service or URL may be, the odds are good that no user will ever tell you that. Laziness, public goods, learned helplessness / low standards, I don't know what it is, but never assume that you are aware of severe breakage (or vice-versa, as a user, never assume the creator is aware of even the most extreme problem or error).
Even the biggest businesses.... I was watching a friend the other day try to set up a bank account in Central America, and clicking on one of the few banks' websites to download the forms on their main web page. None of the form PDF download links worked. "That's not a good sign", they said. No, but also not as surprising as you might think - the bank might have no idea that some server config tweak broke their form links. After all, at least while I was watching, my friend didn't tell them about their problem either!
In fact, the example seems to suggest the opposite: a 17+ year successful paid subscription business – to which you appear to be a generally-satisfied customer! – receives enough "business value" from the practice, despite its failure modes, they don't want to stop. Improving their probe of the risk-of-failure was enough.
Seemingly, the practice works often enough, pleasing more destination sites than it angers, that "referral tracking" is not something "so minor".
The point was it was dangerous in a way they didn't even realize was an issue, for a thin business rationale. Unless you are going to do thorough tests and understand the risk you are taking (which they did not, as evidenced by screwing it up systematically at scale for years), you should not be doing it.
And it's not obvious that they are correct in their tightened-up testing, because even if a link is correct at the time they test it, it could break at any time thereafter.
> to which you appear to be a generally-satisfied customer!
No matter what _X_ is, _X_ would have to be a pretty epic screwup to make a customer unsubscribe solely over that! I never claimed it was such a major epic screwup that it could do that. So that is an unreasonable criterion: "well, you didn't outright quit, so I guess it can't be that bad." Indeed, but I never said it was, and somewhat bad is still bad; I was in fact fairly annoyed by the random breakage, and at the margin, everything matters. If TB did a few other things, in sum, they could potentially convince me to let my subscription lapse. An annoyance here, a papercut here, and pretty soon a generally-satisfied customer is no longer so satisfied...
Ensuring both sides of a hyperlink agree/consent was a design flaw that limited the uptake of pre-web hypertext systems. The web's laissez-faire approach demonstrated a looser coupling was far better for users, despite all the new failure modes.
Of course any site/server has the practical power to treat inbound requests as rigorously (or harshly) as they want. But by the web's essential nature, it is equally part of the inherent range-of-freedom of outlink authors to craft their URLs (and thus the resulting requests) however they want. URLs are permissionless hyperlanguage, not the intellectual property of entities named therein.
Plenty of sites welcome such extra info, and those that don't want it can ignore it easily enough – including by just not caring enough about the undefined behavior/failures to do anything about them.
Though, when a web publisher has naively deployed a system that's fragile with respect to unexpected query-string values, they should want to upgrade their thinking for robustness, via either conscious strictness or conscious permissiveness. Thereafter, their work will be ready for the real web, not just some idealized sandbox where scolding unwanted behavior makes sense.
No qualms with OP, your site your rules.
Umm, what? I don't know what they're actually sending when they think this, but if you think curl is broken, you should re-think whether maybe you're the one doing something wrong.
Here are some examples showing curl not stripping question marks (obviously), I am very curious what this person was actually seeing
$ curl -s 'https://httpbingo.org/get?' | jq .url
"https://httpbingo.org/get?"
$ curl -s 'https://httpbingo.org/get?path' | jq .url
"https://httpbingo.org/get?path"
$ curl -s 'https://httpbingo.org/get?path,query=bananas' | jq .url
"https://httpbingo.org/get?path,query=bananas"
$ curl -s 'https://httpbingo.org/get????' | jq .url
"https://httpbingo.org/get????"
$ curl -sv 'https://httpbingo.org/????' 2>&1 | grep :path
* [HTTP/2] [1] [:path: /????]

$ curl -s 'https://httpbingo.org/get?' | jq .url
"https://httpbingo.org/get"
This may require further investigation.

$ curl -s 'https://httpbingo.org/get?' | jq .url
"https://httpbingo.org/get"
But on macOS + Bash/Zsh + curl 8.7.1:
$ curl -s 'https://httpbingo.org/get?' | jq .url
"https://httpbingo.org/get?"

I see some related changes here: https://github.com/curl/curl/commit/3eac21d
Though I forget if any shell does stuff like that in quotes. Or printing oddities.
It also makes me wonder what other noxious online behaviours might be addressed through ... creative ... client-side responses similar to this.
We've already seen, for years, sites attempting to socially-condition people over the use of ad-blockers and Javascript disablers. No reason why the Other Side can't fight back as well.
Yes, let's unilaterally decide that query strings are bad because one website (ab)uses query strings to load different fonts.
It's the query strings that are the problem, not the website!
jfc.
Look, I'm against utm fragments as much as the next guy, but let's not throw away a perfectly good thing because tracking is evil.
It's really not abusing query strings, from a standards perspective; a 404 is not an improper response to an unexpected query string.
they added these ugly query strings to every click on their site, bonkers: ?ref_=nm_ov_bio_lk
> And you can do what you want with yours!
That does not make a lot of sense. Yes, you can do what you want with your website, but query strings are a way for users to ask for additional information or express wants or needs. I use them on my own websites to have more flexibility. For instance:
foobar.com/ducks?pdf
That will download the website content as a formatted .pdf file. I can give many more examples here. The "query strings are horrible" take I can not agree with at all. His websites don't allow for query strings? That's fine. But in no way does this mean query strings are useless. Besides, what does it mean to "ban" them? You simply don't respond to query strings you don't want to handle. We do so via general routing in web applications these days.
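For what it's worth, that kind of switch is tiny in most stacks; a Go sketch of the ?pdf example (the handler body is invented):

package main

import (
    "fmt"
    "net/http"
)

func ducks(w http.ResponseWriter, r *http.Request) {
    // A bare "?pdf" shows up as a present key with an empty value.
    if _, wantPDF := r.URL.Query()["pdf"]; wantPDF {
        w.Header().Set("Content-Type", "application/pdf")
        // ... render and write the PDF here ...
        return
    }
    fmt.Fprintln(w, "<html>the normal page</html>")
}

func main() {
    http.HandleFunc("/ducks", ducks)
    http.ListenAndServe(":8080", nil)
}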
That's not at all what the article says. You're responding to a weird strawman that doesn't resemble the article's actual point.
This isn't relevant when talking about links to his site. This is relevant when talking about links to your site.
> Besides, what does it mean to "ban" it? You simply don't respond to query strings you don't want to handle.
It means that you're going to get some sort of 400 error when you follow a link to his site with a query string attached to it. He simply will not respond to query strings that he doesn't want to handle, which is all of them.