I think I prefer a middle ground. I like Sivers' short URLs, but I'd go with a word rather than just letters. Letters seem like a step too far for my taste; it looks untidy.
In reality I'd probably use both methods: a long, SEO-delicious URL for the actual article and a short redirect to it, best of both worlds. Use whichever is appropriate for the medium (preferring to link online to the non-redirect, for less future hassle).
Your hierarchy won't necessarily help others. For URIs that leak into the UI/UX, being short is almost certainly good advice. You can always have them be redirects to the hierarchical names you would prefer.
Don't believe me? Twitter, for example, has short, meaningful URIs for users' profile pages, and long, meaningless-to-users URIs for individual tweets. And in the sea of tweets there is no hierarchy in their naming.
I think it comes down to a mix of (a) how people (and robots) organize and find information, and (b) tool limitations and genericity.
Regarding (a), I personally like the view that content lives in a "flat world" on top of which we overlay different structures to organize and filter the same set of content. In that worldview, web users' entry points can be more than directory listings. A great inspiration is how Wikipedia offers a way to find which articles use a given picture: each picture acts like a "category", the same way that "recent changes" is another filter on the same "flat world" of articles.
However, what is immensely difficult is to standardize any of this in a world where de facto implementations burgeon, flourish, and eventually become out of touch (e.g., sitemaps, RSS, OpenGraph). Hence we are stuck with very limited but very generic tools (b), for which rules like "directory listing on the slash separator" or "generate a JSON of the whole site's connections to display as an interactive graph" (which I do on my personal blog) are merely local workarounds that need a bit of duct tape to work.
A poor example doesn't make for a poor argument.
I like hierarchical URLs because they allow me to try to find more content by stripping the end of the URL to see what is up the hierarchy.
This sounds more like a workaround for a website with poor navigation and UX, but ultimately you are still at the mercy of the website, e.g. if they have /articles/<title> links but throw a 404 when you try to access /articles directly. It's up to the website to choose how navigation works, and I don't think we should constrain the structure of website URLs just to support these hacky workarounds.
Now there are other reasons for hierarchical URLs mentioned in the thread, like providing better semantic meaning when representing content that is already hierarchical. Though I find that is rarely the case. For example, you might think to represent blog posts like myblog.com/articles/<title>. But you could also do it like myblog.com/<article year>/<title> (e.g. myblog.com/2017/why-choose-short-urls). I've seen both formats, and both are valid to a certain degree. So choosing one is a bit arbitrary. But if you allow both formats, that just pushes the ambiguity onto the user. For example, if the user wants to bookmark the article, which URL do they bookmark?
From my experience, data follows graph structures, so trying to force a hierarchical structure and doing things like encoding paths in URLs feels too arbitrary and adds unnecessary ambiguity.
URL hacking is a clunky and extremely uncommon method of navigation. It should be the least of a website's concerns. We don't design our doors around the 0.01% of people who get in by kicking them down.
I agree. Also, images that are meant to be included in your articles can still have long urls if you want. Unless those images are urls you want to share, of course; it's about sharing urls, and that's much easier to do when they're short.
If you've got a CMS, I suppose you could do both: store each article in its hierarchical place, but also give each article (or each article you consider important enough to share, but I would hope that's all of them) a short name through which it can easily be remembered and shared.
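A minimal sketch of what that could look like, assuming a small Python/Flask setup (the slugs and target paths below are made up for illustration, not anyone's actual scheme): the CMS keeps a table of short names, and each one simply redirects to the article's hierarchical home.

    # Hypothetical sketch: short slugs that redirect to the canonical hierarchical URLs.
    from flask import Flask, abort, redirect

    app = Flask(__name__)

    # Each article you want to share gets a short, memorable name...
    SHORT_NAMES = {
        "su": "/blog/2022/05/08/short-urls-why-and-how",
        "ff": "/talks/first-follower",
    }

    @app.route("/<slug>")
    def short_link(slug):
        target = SHORT_NAMES.get(slug)
        if target is None:
            abort(404)
        # ...and simply redirects to where the article actually lives.
        return redirect(target, code=301)

The hierarchical URL stays canonical (and indexable); the short name is just a convenience for saying out loud or typing.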
I believe this is a rather naive post on the subject of URLs, and I honestly wish it had never made it to the HN front page, but seeing as we are here... let's go through the points made and how they conflict with each other...
"You can remember them"
Well, no. You might remember a few. But you are only going to remember so much. This has a very limited benefit.
"You can tell someone"
Yeah, but no. I barely tell people a domain. I'll just send them a link on the spot. Again, very limited benefit.
"They look nicer."
Ok, but not a strong use-case. In general usage the display name of a link does not need to reflect the URL of the link.
"They remove the middle-man" ... "encourage people to copy and paste" ...
I'm just gonna say this is plain false as a generalization. I've worked on URL sharing in publishing for almost 2 decades (yikes).
It's built on a little bit of truth: mobile devices all have built-in share functions to text and spread links via whatever apps are installed.
Again, very limited impact and effect.
"They’re enough."
This is my favorite! He talks about unique combinations, but this goes against his original point of making them memorable. He should have at least talked about potential dictionary-level word combinations... sigh
Yes to all of that. Plus the url of the post itself (https://sive.rs/su) takes this to a perverse extreme.
I agree that URLs should be humane, but being typeable from memory is not the primary objective. Having people be able to read the URL and understand something about where it will go is.
Also, unless you know your domain will only ever host one type of content, then having no hierarchy will lock you in. https://sive.rs/blog/on-short-urls is plenty short, but so much more valuable to any humans using the url.
If you don't agree that URLs need to be humane, then none of it matters and sure, use a UUID for every page.
URIs are not supposed to be part of the UI/UX, but they unavoidably have leaked and will continue to leak into it. Therefore, making those URIs most likely to leak into the UI/UX be user-friendly is very good advice.
Because if you have enough of them for internal purposes then they can't have meaning to users.
> It seems like the opposite has happened. URIs were the default navigation UX, but we are slowly pushing them away.
URIs were the default entry point for navigation, and they still are. That's one way in which they leak, but these are intentional so maybe best not called "leaks". There's also the URIs that leak in other ways, like in the browser status bar, and so on.
Well, that's the thing: most of the advice has a bit of truth to it.
However, the benefit is so limited that the larger argument/conclusion (shorter URLs are better) ends up being wrong.
It ignores many valid points raised here, like URLs that include structure (be it date-based, or taxonomy, or hierarchy) provide immense value in many cases.
I own a 3-letter domain with a 2-letter TLD that still spells my family name. Many of my friends find it cool when I share links via that domain. The links are indeed easy to remember, but it comes at a higher yearly renewal cost, and I have forgotten to renew twice. Luckily, I was able to re-register it (nobody else wants it ;-)).
I used to have a mail address along the lines of d@nblows.com
But beware, single letter local parts are not universally supported by web sites.
Microsoft accounts with those mail addresses are possible (they used to be impossible), but recently I stumbled upon two other web sites that didn't like single letter local parts.
I use an email address with a single-letter username (m@...) and have never had a problem with services rejecting it. The only problem is that rarely when I sign up for a service using a random password like gAdlzIBVom4j3Paf, it tells me "your password cannot contain your username" because it contains the letter "m". Hah!
Haven't had that problem before; it's far more of a problem when I try to register with site-specific email addresses like d+hn@nblows.com, which I've tried to get into the habit of using.
Some mail providers (e.g. Gmail) use the plus sign as a label, so everything still gets routed to d@nblows.com but tagged "hn" or "twitter" or whatever.
It's quite a convenient way to have site specific addresses while still only having a single mailbox to manage.
I get told that my email is invalid by some apps, for using a .so domain. Apple was one that would not let me create an apple ID upon buying a new laptop. Eventually, I set up the machine without Apple ID, and opted to create it after the fact via apple.com which accepted it. It makes me wonder if it is intentionally a "Hard no" during system setup, and then a "well okay if you insist" when the user has installed the system without using an Apple ID as a last resort.
Yes, I did that single-letter email thing way back around 2007-2008 and used it with my bank (for a company account that's now closed). Now I can neither change it nor log in (it's not valid to them), and I have set up a specific filter just to ignore the newsletters and offers that bank keeps sending to that address.
I don't use the short oin.am as my primary (which is oinam.com); I have it as an alias domain, allowing me to quickly say or hand-write my email to someone and continue the conversation from my main domain -- say, ß@oin.am.
I literally had a bank tell me my email was invalid today. It was 2 letters before the @, and I think that's why. I'll probably come up with a longer one that I like (I own the domain) later and try again, but it just felt incredibly lame for them to reject my email like that.
>> "So why did we switch to delicious.com? We’ve seen a zillion different confusions and misspellings of “del.icio.us” over the years (for example, “de.licio.us”, “del.icio.us.com”, and “del.licio.us”), so moving to delicious.com will make it easier for people to find the site and share it with their friends. Of course the old del.icio.us domain and all its URLs will continue to work. Also note that the domain change requires a new login cookie, which is why everyone has to log in again."
I wonder if the parent was being sarcastic. I could never remember where the dots went; I suspect I wasn't the only one, and that was probably why they ended up changing to delicious.com.
Reminded me for some reason of mikerowesoft.com - I believe the story is that Microsoft sued the guy for infringement and ended up winning. Let's hope a tech company with the name Danblos or something doesn't blow up.
When changing ISPs, I gave them ispname@myname.fr and the dude got confused and couldn't understand it. He wanted to call his supervisor to ask if I was allowed to do that.
I think you're missing that the feature we're talking about is specifically within the $5/month option, in much the same way that ProtonMail (the one I use), also hides their catch-all option behind a higher price-point.
I'm also fairly sure that they don't let you reply as the address the catch-all was placed on, which is an important feature.
True, I missed that "custom domain" isn't part of the lowest tier.
The point still stands though, if you have a mail provider that supports custom domains there usually isn't a limit on the catch all aliases you can add (or a very high one). In any case it's far from being "prohibitively" expensive.
> I'm also fairly sure that they don't let you reply as the address the catch-all was placed on
That is definitely possible with Fastmail, as that's how I use it. In their case I believe the feature is called an "Alias". If you reply to an email, you just select which alias you want to reply from.
I use Cloudflare Email Routing. The only problem I currently have is that if I don't think ahead, I might have to set up an alias on the spot (I probably should set up a catch-all to make sure it doesn't just drop mail). It redirects to generally anywhere (though I mostly have it set up to forward to an outlook.com family plan, which is already paid for for other reasons).
I do something similar with the old Gmail for Domains product (or whatever it was called before Workspace) - it lets me add real mailboxes but also have a catch-all, where everything else gets delivered to a specific address.
I know this is not available any more so I've done the same with Cloudflare Email Routing which lets you set up a catch-all and is (still) free.
Another happy owner here of a 3-letter domain with a 2-letter TLD: rio.hn. I can spell my name, Dario, as d@rio.hn. Lots of people laugh when they realize it's my name; they find it clever.
How many people type the whole URL, and how many reach it via Google, a QR code, or another site? This is good if the content is unique. I find URLs with dates useful when I want to quickly assess how fresh the content is.
I often find articles that don't have the date at all, and the information turns out to be really out of date. Is hiding the date some kind of SEO trend?
> I often find articles that don't have the date at all
I hate this trend; one theory I heard was that it helps the content appear "evergreen".
Another theory is that it helps hide inactivity of content updates; like 100s of articles will be produced in a short span of time and then nothing for months or even years.
But doesn't their crawler keep track of when a page first appeared, and when it's updated? I wouldn't expect just removing the date to work in this regard, but I don't know.
> I often find articles that don't have the date at all
At the same time I also always find articles that have an "Updated at" timestamp of a few days ago, I guess that's somehow done automatically to gain some recency points for SEO, not sure if that or removing the timestamp is more annoying.
I'm not sure I would want to have all my writing in a single directory, but I guess you can't argue with 20 years of longevity ... on both sivers' site and PG's.
I think the motivation for using HTTPS with static content is to prevent man-in-the-middle attacks that inject advertising or malware into pages. I have read about governments, ISPs and hotels doing that sort of thing.
One example of what can be done is to cause the users to DDoS a third party.
The man-in-the-middle attack potential other replies have mentioned is possible, but my reason is far simpler. Defaulting to https for everything removes the cognitive load of having to decide whether to trust a website and pushes users to believe everything should be secure by default. The environmental impact is, in my opinion, worth it.
http is still susceptible to man in the middle attacks even if the site contains no secrets. You have no guarantee that the contents of the website haven't been tampered with (not that attackers would have much incentive to tamper with a blog).
That's pretty much my point, there is no reason to meddle with the contents here. It's different if you live in a country where your ISP automatically MiTMs you (as a sibling comment to yours pointed out), but that's not a thing here at least. If you know that's a threat and it bothers you, you would likely already have measures in place anyway?
Depending on how much you write, URL collisions can naturally become a real problem. If you use an SSG then all your post HTML documents will be in the same directory, forever. This isn't a problem if you don't publish many posts, but if you post a lot your directory can become rather unwieldy. I'd say, at least throw a year in there: /2022/su/.
This is indeed a big problem I faced when I moved from WordPress to Jekyll. I decided to segregate them by year (very unique), so my articles/posts are at /yyyy/foo-bar/. I decided on that URL structure loooong back in WordPress, as it was the most sane thing I could do while still writing a lot but having some segregation. E.g. there are about 93 files/posts in the "2006" folder for my website.
Quick, what’s the URL of this discussion? Will you remember it? Does that have any impact on your usage of HN?
The vast, vast majority of website sessions start with a click. When people bother to type things in, they type them into search bars (reminder that in most browsers, the “address bar” is also the search bar).
So be clear what you want from your URL structure. If you want search traffic, you’d do better to organize it for Google (hierarchical topic-based) than making it short.
If you want to be able to say it or print it for easy typing, a domain name is going to be best. Catch it and redirect it to the target page.
Even “domain.com/word” is going to be hard for most people to remember. If they remember it at all, they will probably type in “domain word” and let Google figure it out for them.
I'm getting old, because this reminds me of a time when I built a URL shortener years back. We needed about 30m unique short URLs, and at the time third parties were expensive and a bit annoying for our use case. The project was successful, but we hit a few bumps in the road, i.e. when we came up with the simple solution of the URLs being their autoincrement IDs base-encoded (base... I think we used 40?), we didn't consider that sometimes the results might be offensive words. These URLs were being used for referral links, and sending someone www.shorturl.com/cunt didn't go down so well. We essentially had to censor certain integers out to prevent this.
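For the curious, a rough sketch of that kind of scheme (the alphabet, blocklist, and helper names here are hypothetical, and I'm using base 36 rather than whatever base was actually used): encode the autoincrement ID, and skip any ID whose encoding contains a blocked word.

    # Hypothetical sketch: encode autoincrement IDs as short slugs,
    # skipping any that happen to spell out a blocked word.
    ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"   # base 36 here; a larger alphabet gives shorter slugs
    BLOCKED_WORDS = {"badword1", "badword2"}            # fill in the words you never want in a link

    def encode(n: int) -> str:
        """Convert a non-negative integer to a base-len(ALPHABET) string."""
        if n == 0:
            return ALPHABET[0]
        digits = []
        while n:
            n, rem = divmod(n, len(ALPHABET))
            digits.append(ALPHABET[rem])
        return "".join(reversed(digits))

    def next_safe_id(start_id: int) -> tuple[int, str]:
        """Return the first ID >= start_id whose slug contains no blocked word."""
        i = start_id
        while True:
            slug = encode(i)
            if not any(word in slug for word in BLOCKED_WORDS):
                return i, slug
            i += 1  # "censor" this integer: skip it and try the next one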
I do think this is a good idea in some situations, but the author doesn't get into the advantages of the style they dislike. Let's look at that path:
/blog/2022/05/08/short-urls-why-and-how.html
I can tell it's a blog post, I can tell when it was written, and I can see enough of the title that I can remember if I already read it. The ".html" doesn't add anything though.
Personally, I find putting the title of the post and the year (but not the full date) in the path is a good compromise.
This way people can still see the approximate date, and it also handles the small chance that in the future I'd want a post with the same title again (can't think of why, but I didn't want to rule out the possibility).
I really appreciate that Blogspot and WordPress sort of standardized putting the date in the URL; I like being able to spot the dates of blog posts at a glance. Yes, your writing may be timeless (hopefully), but to me, knowing when something was written matters most of the time.
I think it is a good idea in some cases, but not for everything. Keeping the blog posts under the blog/ route doesn't seem like digital pollution to me.
Can you? I can remember some domains, but I can't recall many full URLs, except perhaps some that I use almost daily. Certainly not one for a blog post that I'll read only once.
> You can tell someone. You can even say it out loud!
If the other person is going to write down the URL, I'm not sure there's a difference here. Giving someone your email address over the phone is already a painful experience no matter what. Always have to make sure the dots are in the right place. And if the first point is wrong — "You can remember them." — then making it a little shorter over the phone isn't all that advantageous, because they're going to write it down or type it anyway, not memorize it.
The issue I see with this is that it does not allow for genericity.
URLs on sites with a ton of content often have a regularity to them. For example you might want to have different types of articles, say rooms, people and towels. For each article type you might want different affordances that usually have overlap, like an index, an input form, an activity feed...
URL paths are a good way to encode that kind of regularity.
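To make that concrete, a tiny sketch (Flask here purely as an illustration, with the article types from the example above): each type gets the same family of paths, so the regularity is encoded directly in the URL structure.

    # Hypothetical sketch: the same affordances repeated for each article type.
    from flask import Flask

    app = Flask(__name__)
    ARTICLE_TYPES = ["rooms", "people", "towels"]

    def register(kind: str) -> None:
        # /rooms           -> index
        # /rooms/new       -> input form
        # /rooms/feed      -> activity feed
        # /rooms/<item_id> -> a single item
        app.add_url_rule(f"/{kind}", f"{kind}_index", lambda k=kind: f"index of {k}")
        app.add_url_rule(f"/{kind}/new", f"{kind}_new", lambda k=kind: f"form for a new entry in {k}")
        app.add_url_rule(f"/{kind}/feed", f"{kind}_feed", lambda k=kind: f"activity feed for {k}")
        app.add_url_rule(f"/{kind}/<item_id>", f"{kind}_show",
                         lambda item_id, k=kind: f"item {item_id} in {k}")

    for kind in ARTICLE_TYPES:
        register(kind)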
Of course on a blog of a single person it’s whatever.
Similar: The variants of „Just write HTML!“ posts that come up every once in a while.
The "They’re enough" point is of course a little silly. Yes, there are thousands of combinations if your short URL can contain any character. But his example is about words, and that leaves you with far fewer combinations.
Exactly. Unless your blog is incredibly broad, you're probably writing about the same subject more than once. What is the author going to do when they write about short urls next time? /su2?
I think I'd prefer it if you were creative with the content of your posts rather than your urls :) I mean, if you think there's no value in the content of the url whatsoever, then just start at /1 and work upwards...
If those hierarchical directories actually work, i.e. http://example.com/blog/2022/05 actually lists blog entries written in May 2022, I'd certainly prefer that to this type of "optimization".
The only use case I've found for short URLs is in media broadcasts (or similar). It's the only scenario where I find shortening necessary, as the audience will have to actually type out the URL. In text formats, you generally have access to hyperlinking text to the full URL.
I created a link shortening service that works in exactly this way, it's called pxl.to. Based on Amazon CloudFront's global edge network of 310+ Points of Presence in 47 countries. What this means for the user:
- Instant links with low latency no matter where your visitors are in the world
- Robust and reliable links that will never go down, ever
- Protection against network and application layer attacks
- Unlimited tracked clicks (no cap on clicks/month)
The service allows the creation of links using a hierarchical naming structure, as mentioned in these comments like example.com/folder/test.
It also auto-generates an SSL cert for every custom domain added, for HTTPS-secure links.
I'd love any feedback! The service is largely free and can be found at https://www.pxl.to
I use mydomain.ext/long-descriptive-name for SEO but have my own URL shortener, with.id/something (with.id is my short domain, and a very "generic" name). Basically I follow Google, like their goo.gl shortener.
I would love some sort of standard for "short" URLs that are primarily designed for sharing. This would serve as an alternative to the current, usually hierarchical, URLs that are primarily designed around SEO.
The bad thing about existing third-party URL shorteners is that they might (and do) go out of business at any time! I'm not sure whether we need a standard for it, but it might be a good idea to be independent of such external services.
A simple, generic way could be a hash of the original URL with a compact representation:
/s stands for "short" and the ID is the hash representation. If you use a-zA-Z0-9 plus some URL-safe characters like _, it should be reasonably short even for large sites. The CMS or whatever software doesn't have to implement anything special, because it is mostly just a generic URL shortener running on the same host.
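A back-of-the-envelope sketch of what I mean (the choice of SHA-256, URL-safe base64, and an 8-character slug is arbitrary here):

    # Hypothetical sketch: derive a short, stable /s/<id> slug from the canonical URL.
    import base64
    import hashlib

    def short_id(url: str, length: int = 8) -> str:
        """Hash the canonical URL and keep a compact, URL-safe prefix of the digest."""
        digest = hashlib.sha256(url.encode("utf-8")).digest()
        # urlsafe_b64encode uses a-z, A-Z, 0-9, '-' and '_', all of which are fine in a path.
        compact = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
        return compact[:length]

    canonical = "https://example.com/blog/2022/05/08/short-urls-why-and-how.html"
    print("/s/" + short_id(canonical))  # /s/ followed by the first 8 characters of the encoded hash

Truncating the hash means collisions are possible in principle, so the software would still want to check that a slug is free before publishing it.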
I would love to see something like a "short" link tag. For example, imagine that YouTube provided the following tag on video pages:
<link rel=short href="https://youtu.be/abcdef"/>
Then copying the URL could copy the short link instead.
That being said, I like it when the URL is readable. The YouTube example works because everyone knows a YouTube link is a video. Presumably for non-video links they could use something like /user/foobar to differentiate from their "main product". But for blog posts I would much rather see something like /short-urls than /su, as the reader gets nothing out of the latter.
Only learning about this now. Thanks for pointing it out. I remember seeing this in the source of some blogs, great to know the exact snippet of code I should be using.
I have found one truly valuable use case for shorter URLs: when a link is included in a printed document. The problem is that every URL shortener replaces the domain, which is IMO a highly dangerous thing.
I like this, and I'm unreasonably happy that I once built a tiny website to share my music, where you get a bunch of listening options when visiting `example.com/listen/songname` :) Another cool example would be this guy's blog (look at the URL): https://there.oughta.be/a/wifi-game-boy-cartridge
I like this idea; it would be good to have the best of both worlds. Is there a way to optimize search engine visibility for such short URLs, perhaps with a good title, meta tags containing keywords, etc.? Is there an HTML meta tag that gives search engine crawlers an alternate hierarchical URL for the same page? I believe setting up the server to do this is not too difficult.
Regarding the example of naming the file hi instead of hi.html and using the nginx config "default_type text/html" -- is there an equivalent to this for other web servers (e.g. Caddy)? Perhaps it would be cleaner to just create a directory called hi with an index.html?
I usually create a "file filter" instead: keep the extension as is (hi.html) and use the try_files directive in nginx to make the server also look for a file at the same path with .html appended. It has been a while since I used nginx, but it looks something like this:
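    # Approximate pattern (untested, from memory): for a request to /hi, also try hi.html on disk.
    location / {
        try_files $uri $uri.html $uri/ =404;
    }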
That's what many static site generators do: use a folder with an index.html for "clean URLs".
Though personally I would do as the other comment said: keep it hi.html and use try_files or mod_rewrite to handle it. I mean, ask me to set up something like this and I probably would never have thought of doing it the way in the article. It just seems weird to me.
I don't understand the benefit of this at all. I've been dropping the ".html" from URLs for ages now, not by modifying the filename, but by using Apache's MultiViews. Apart from anything else, some tools still benefit from a filename's extension advertising that file's type, even if they shouldn't.
These are my battle-proven URL rules. Whenever I violated them or changed their priority, I regretted it.
URL rules:
URL-Rule 1: unique (1 URL == 1 resource, 1 resource == 1 URL)
URL-Rule 2: permanent (they do not change, no dependencies to anything)
URL-Rule 3: manageable (equals measurable, 1 logic per site section, no exceptions)
URL-Rule 4: easily scalable logic
URL-Rule 5: short
URL-Rule 6: with a variation (partial) of the targeted phrase
URL-Rule 1 is more important than rules 2 to 6 combined, URL-Rule 2 is more important than rules 3 to 6 combined, and so on. Rules 5 and 6 are a trade-off; 6 is the least important.
The URL yoursite.com/short violates rule number 3. Let's say you have list pages, category pages, tag pages, database-generated pages, post pages... Now say you change your list pages, in design, in how they look, and you want to know whether they now work better (for whatever metric you do or don't care about). Well, good luck with that: either you create a regex from hell for GA or whatever tool you are using, or you maintain a spreadsheet.
Or you do
example.com/l/short for list pages and example.com/a/super-short for article pages; now they are easy to measure and manage.
Also don't do
example.com/category-name/article, as all hierarchies are imperfect and change over time (there is no permanent or perfect hierarchy), so any kind of hierarchy violates URL rule number 2.
Use %namespaces%: these are one- or two-letter tokens which identify the pagetype (list page, article page). (Note: a pagetype is defined as pages that share the same or a very similar creation process; changing the template or process of the pagetype changes all pages under that pagetype.)
https://www.example.com/%country or language namespace%/%pagetype namespace%/%permanent identifier I have under total control%/ for multi-market and/or multi-language web properties.
Probably not great. The understanding is that having keywords in the URL is valuable. This makes sense since these locations are "more expensive" so if the keyword is there it must actually be relevant. So it would make sense to weight keywords found in the following locations in decreasing value:
1. Public Suffix Domain (costs money to register).
2. Subdomain (255 char limit in domains)
3. URL (generally URLs aren't too long)
4. Content (basically free to stuff keywords into).
> You can remember them. You can avoid the search engine step. No need to search when you already know the answer. Which means…
You don't know what you don't know. Discovery is about leveraging common heuristics. The more context information that's available, the easier it gets to get to the answer. Giving the least amount of information runs counter to that.
A URL is a reference, an identifier, that uniquely references a resource. A search engine essentially captures a ton of context information that you can leverage to get to a set of relevant references. An identifier can be a meaningless string of characters - e.g. a UUID - and as long as it's accompanied with context information, you can get to the resource it references.
Conversely, if you capture context in the URL itself - meaningful words, dates, authors,... - you're actually providing a breadcrumb trail for visitors - people and robots - to follow, leveraging common heuristics they might use to get to that web resource directly. Might, because discovery is always a process of making educated guesses and following cowpaths to get to the right answer.
So, no, making URLs shorter isn't necessarily advantageous.
In the same vein: long passwords using common, easy-to-remember words instead of an unintelligible string of 16 characters.
Tangentially, I also have ambiguous feelings about the widespread use of URL shorteners. Partly because they act as a middleman providing brittle URLs that can - and will - break in the long run. Partly because they hide a ton of potential context information that might be captured in the original URL.
> You can tell someone. You can even say it out loud! Whether answering an email or talking to someone on the phone, I can say, “Go to sive.rs/ff for my talk about the first follower.” or “My newest book is at sive.rs/h.” I do this often, so having memorable URLs saves me a lot of searching.
Again, context matters. This might work if you want to highlight specific content - e.g. a marketing page for your book - but it certainly doesn't work all the time for all your content. Is that blogpost from 2007 really that important that you need to be able to "say it out loud" to someone on the phone today at any moment?
Short URL's come at a cost. What's the trade off you're making here?
> They look nicer. They’re aesthetic. They show care.
I don't care. Really. I don't. I care about readability and accessibility. Sure enough, URLs with tons of nonsensical query parameters are a blight, but this has more to do with readability than "aesthetics". It's a URL, not poetry.
> They remove the middle-man. With long URLs, people use those ugly social share buttons that promote (and further entrench) harmful social media sites, and add visual clutter to your site. Short URLs encourage people to copy and paste the URL directly, which lets them share it anywhere, instead of only the sites for which you have a share button.
Or maybe the answer here is to avoid using social share buttons on your website at all?
> They’re enough. Using 36 characters (a-z and 0-9): 4-character URLs give you 1,679,616 (36⁴) unique combinations. You don’t need more than that.
Well, how about "sive.rs/qxfa" or "sive.rs/kxig" or "sive.rs/ddiz"? It's an argument that's in direct contradiction with the author's first argument. Again, heuristics matter. Readability matters. A big chunk of those 1.6 million odd unique combinations aren't usable off the bat because they are simply unintelligible strings of characters without meaning.
> Go to sive.rs/ff for my talk about the first follower
I absolutely will not. At least with known URL shorteners I know that’s what I’m getting and can inspect further. When I get links like this I assume I’m getting spam or malware. Almost all of the time it’s spam.
The downvotes are because sive.rs is not a URL shortener. It's the blog that this post (both the one you're quoting and the link within) is written on. It's advocating for shorter permalinks.
A simple example is:
https://www.kozubik.com/
... where you can find "items":
https://www.kozubik.com/items/
... and one thing inside of "items" is an article on NDS emitters:
https://www.kozubik.com/items/nds/
... which contains supporting multimedia objects:
https://www.kozubik.com/items/nds/images/
Not only do you know where you are in the tree, but it is also discoverable: by using index.html default pages and your web server's directory indexing, you can have a very flexible resource that is easy to navigate.