I don't keep a "dick bar" that sticks to the top of the page to remind you which site you're on. Your browser is already doing that for you.
A variation of this is my worst offender: the flapping bar. Not only does it take up space, it flaps every time I adjust my overscroll by pulling back, and it covers the very text I was trying to adjust. The hysteresis for hiding it again is usually too big, which can make you overscroll yet again.
Special place in hell for those who hide the flap on scroll-up but show it again when the scroll inertia ends, without even pulling back.
Can’t say here what I think about people who do the above, but you can imagine.
Another common problem with overlayed top bars is that when following fragment links within a page, the browser scrolls the page such that the target anchor is at the top of the window, which then means it’s hidden by the top bar. For example, when jumping to a subsection, the subsection title (and the first lines of the following paragraph text) will often be obscured by the top bar.
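Site authors can mitigate this one purely in CSS; a minimal sketch, assuming a fixed bar roughly 3em tall:

  /* make fragment jumps land below a fixed top bar (~3em assumed) */
  html {
    scroll-padding-top: 3.5em;
  }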
Funnily enough, for years I would say the general consensus on HN was that it was a thoughtful alternative to having to scroll back to the top, especially back when it was a relatively new gimmick on mobile.
I remember arguing about it on HN back when I was in uni.
It can actually be done correctly, e.g. the way Safari does it in its top-URL-bar mode.
- When a user scrolls content-up in any way, the header collapses immediately (or you may just hide it).
- When a user scrolls content-down by pulling, without "a kick", then it stays collapsed.
- When a user "kick"-scrolls content-down, i.e. scrolls carelessly, in a way that a when finger lifts, scroll still has inertia -- then it gets shown again. Maybe with a short activation distance or inertia level to prevent ghost kicks.
As a result, adjusting text by pulling (including repeatedly) won't flap anything, and if a user kick-scrolls, then they can access the header, if it has any function to it. It sort of separates content-down scroll into two different gestures, which you just learn and use appropriately.
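A rough sketch of those rules in JS (not Safari's actual implementation; assumes a header toggled via a .collapsed class):

  // Collapse on content-up; on content-down, only reveal after an
  // inertial "kick" (finger already lifted) past a small threshold.
  const header = document.querySelector('header');
  let lastY = window.scrollY;
  let touching = false;
  let upSinceLift = 0;

  addEventListener('touchstart', () => { touching = true; upSinceLift = 0; }, { passive: true });
  addEventListener('touchend', () => { touching = false; }, { passive: true });

  addEventListener('scroll', () => {
    const dy = window.scrollY - lastY;
    lastY = window.scrollY;
    if (dy > 0) {                       // content-up: hide immediately
      header.classList.add('collapsed');
      upSinceLift = 0;
    } else if (dy < 0 && !touching) {   // inertial content-down: a "kick"
      upSinceLift -= dy;
      if (upSinceLift > 80) header.classList.remove('collapsed');
    }                                   // content-down while pulling: no flap
  }, { passive: true });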
But instead most sites implement the naive behavior described in the comment above. If a site does that, its DNS record should be immediately revoked and its owner put on probation, at the legislative level.
Most mobile browsers lack a "home" key equivalent (or bury it in a not-always-visible on-screen soft-keyboard). That's among the very few arguments in favour of a "Top" navigation affordance.
I still hate such things, especially when using a desktop browser.
On iOS, tapping on the top “status” area will bring you to the top in any browser. It’s an iOS-wide functionality on any vertically scrolling view. I sometimes miss that on Android, but on the other hand the scroll acceleration is so much faster on Android that you can always scroll to the top quickly.
I think some, if not most, mobile browsers - even apps - used to implement it via a space near the top of the window/screen. That seems to have gone away, though.
In Firefox you can disable this behavior under Settings -> Customize -> Gestures. If your browser does not have an equivalent setting, get a better browser.
I do have Firefox (Fennec F-Droid) installed on that tablet. The reading experience is so vastly inferior, despite Firefox's numerous capabilities (most especially browser extensions), that it's not even funny. Mostly because scrolling on e-ink is a disaster.[1]
Chrome/Chromium of course is an absolute disaster.
EinkBro has incorporated ad-blocking, a JS toggle, and cookie rejection, which meet most of my basic extension needs. Its paginated navigation (touch regions that scroll by a full screen) also works far better with e-ink display characteristics.
I'll note that on desktop I also usually scroll by screen, though that's usually by tapping the spacebar.
--------------------------------
Notes:
1. The thought does occur that Firefox/Android might benefit by an extension (or set of same) which address e-ink display characteristics. Off the top of my head those would be:
- Paginated navigation. The ability to readily scroll by a full page, rather than touch-and-drag scrolling. (See the sketch after this list.)
- High-contrast / greyscale optimisation. Tweaking page colours such that reading on e-ink is optimised. Generally that would be pure black/white for foreground/background, and a limited greyscale palette for other elements. Halftone dithering of photographic images would also generally be preferable.
- An ability to absolutely freeze any animations and/or video unless specifically selected.
- Perhaps: an ability to automatically render pages in reader mode, with the above settings enabled.
- Other odds'n'sods, such as rejecting any autoplay (video, audio), though existing Firefox extensions probably address that.
I suspect that much of that is reasonably doable.
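The pagination part, at least, looks like only a few lines of extension JS. A sketch, with the tap-zone wiring omitted:

  // scroll by one viewport minus a small overlap, with no animation
  // (smooth scrolling is exactly what you don't want on e-ink)
  function pageDown() {
    window.scrollBy({ top: window.innerHeight - 40, behavior: 'instant' });
  }
  function pageUp() {
    window.scrollBy({ top: -(window.innerHeight - 40), behavior: 'instant' });
  }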
There is an "E-ink Viewable" extension which seems to detect and correct for dark-mode themes (exceedingly unreadable on tablets, somewhat ironically), though it omits other capabilities: <https://addons.mozilla.org/en-US/firefox/addon/e-ink-viewabl...>.
Max Lumi, which is now a couple of cycles old. It's the 13.3" tablet.
Looks as if its current rev is the Note Max, Android 13, and a resolution of 300 dpi (the Max Lumi is 220 dpi, which is already damned good). That's pretty much laser-printer resolution (most are effectively ~300 -- 600 dpi). I wish they'd up the onboard storage (Note Max remains at 128 GB, same as the previous device, mine is 64 GB which is uncomfortably tight).
The Android rev is still a couple of versions old (current is 16, released December 2024), though I find that relatively unimportant. I've mostly de-googled my device, install few apps, and most of those through F-Droid, Aurora Store where that doesn't suffice.
If the Max is too spendy / large for you, the smaller devices are more reasonably priced. I went big display as I read quite a few scanned articles and the size/resolution matter. A 10" or 8" display is good for general reading / fiction, especially for e-book native formats (e.g., ePub). If you read scans, larger is IMO better.
I'm aware and not happy with the GPL situation, but alternatives really don't move me.
Onyx's own book-reader software is actually pretty good and sufficient for my purposes, though you can install any third-party reader through whichever Android app repo you prefer.
My main uses are e-book reading (duh!), podcasts (it's quite good at this; AntennaPod is my preferred app), and Termux (a Linux userland on Android). For Web browsing, EinkBro and Fennec F-Droid (as mentioned up-thread). The native (handwritten) note-taking app is also quite good, and I use it far more than I'd anticipated.
If you're looking for games, heavy web apps, video, etc., you won't be happy. If you're looking for a device that gets you away from that, I strongly recommend their line.
I've commented on the Max Lumi and experience (positives and negatives) quite a few times here on HN:
Thanks! Frankly, so long as I can get a browser and install some reader apps (Kobo, Manga-one, etc.), it would fit my needs fine, and as long as they support older versions of Android for enough years (or I can avoid upgrading the app version) things should be fine. The 10.3" Boox is 80k JPY, which is a bit pricey, but I'll consider it against the Kobo device next time I upgrade e-readers.
FWIW, I also hear good things about Kobo, though I don't have direct experience.
Those are based on Alpine Linux rather than Android, AFAIU, and if you're into Linux are apparently more readily customised and hacked.
(The fact that BOOX is Android is a misfeature for me, though it does make many more apps available. As noted, I use few of those and could replace much of their functionality with shell or Linux-native GUI tools. I suspect battery management would suffer however.)
It has worked since forever on iOS in most (native) apps, including the browser. Tap on the "clock" to scroll up -- that is the home button. In Safari you might need to tap again, if the header was collapsed.
The ACM's site has a bar like that, though it's thin enough that the issue is with the animations rather than the size: it expands then immediately collapses after even a pixel's worth of scrolling, so it's basically impossible to get at with the "hide distracting elements" picker.
I've yet to encounter a "dick bar" that doesn't jerk the page around when it collapses. Not smooth at all. I'm surprised that it hasn't been solved in 10 years.
Hm, is that really the same thing? It's not doing the "it flaps every time I adjust my overscroll by pulling back, and it covers the text I was trying to adjust".
It may not be such an egregious example as what GP comment was referring to, but it was the first thing that came to my mind. Maybe the Medium UI has improved somewhat since I was last annoyed by this.
Text littered with hyperlinks on every sentence. Hyperlinks that do on-hover gimmicks like load previews or charts. Emojis or other distracting graphics (like stock ticker symbols and price indicators GOOG +7%) littered among the text.
Backgrounds and images that change with scrolling.
Popups asking to allow the website to send you notifications.
Page footers that are two pages high with 200 links.
Fine print and copyright legalese.
Cookie policy banners that have multiple confusing options and list of 1000 affiliate third parties.
It can be done tastefully. I think this commenter is talking about the brief period when it was fashionable to install plugins or code on your site that mindlessly slapped "helpful" tooltips on random strings. I always assumed it was some AdSense program or SEO scheme that gave you revenue or good-boy Google points for the number of external links on a page.
In the modern day we've come full circle. Jira uses AI to scan your tickets for non-English strings of letters and hallucinates a definition for the acronym it thinks it means, complete with a bogus "reference" to one of your documents that doesn't mention the subject. They also have RAINBOW underlines so it's impossible to ignore.
I really appreciate hyperlinks that serve as citations, like “here’s some prior art to back up what I’m saying,” or that explain some joke, reference, jargon, etc. that the reader might not be familiar with, but unfortunately a lot of sites don’t use them that way.
Case in point: in the Tom's Hardware article about AMD's Strix Halo (1), there's this sentence:
> AMD says this delivers groundbreaking capabilities for thin-and-light laptops and mini workstations, particularly in AI workloads. The company also shared plenty of gaming and content creation _benchmarks_. (emphasis mine)
I clicked on "benchmarks", expecting to see some, well, benchmarks for the new CPU, hoping to see some games like Cyberpunk that I might want to play. But no, it links to /tag/benchmark.
Another: "Related" interstitial elements scattered within an article.
Fucking NPR now has ~2--6 "Related" links between paragraphs of a story. I frequently read the site via w3m, and yes, will load the rendered buffer in vim (<esc>-e) to delete those when reading an article.
I don't know if it's oversensitisation or progressive cognitive decline, but even quite modest distracting cruft is increasingly intolerable.
If you truly have related stories, pile them at the end of the article, and put in some goddamned microcontent (title, description, publication date) for the article.
As I've mentioned previously, I have a "cnn-sanify" script which strips story links and headlines from CNN's own "lite" page and restructures them into a section-organised, time-sorted presentation. Mostly for reading from the shell, though I can dump the rendered file locally and read it in a GUI browser as well.
My biggest disappointment: CNN's article selection is pretty poor. I'd recently checked against 719 stories collected since ~18 December 2024, and of the 111 "US" stories, 54% are relatively mundane crime. Substantive stories are the exception.
(The sense that few of the headlines really were significant was a large part of why I'd written the organisation script in the first place.)
> Fucking NPR now has ~2--6 "Related" links between paragraphs of a story.
Some sites even have media, like videos or photo carousels in or before an article, the content of which isn't related to the article at all. So you get this weird page where you're reading an article, but other content is mixed in around each paragraph, so you have no idea what belongs where.
Then add to that all the ads and references to other sections of "top stories", and the page becomes effectively unreadable without reader mode. You're then left with so little content that you start questioning whether you're missing important content or media... You're normally not.
I don't believe that these pages are meant for human consumption.
> Text littered with hyperlinks on every sentence.
This is the biggest hassle associated with reading articles online. I'm never going to click on those links because:
- the linked anchor text says nothing about the website it's linking to
- the link shows a 404 (common with articles 2+ years old)
- the link is probably paywalled
Very annoying that article-writing guidelines are unchanged from the 2000s, when linkrot and paywalls were almost unheard of.
Something I wish more site owners would consider is that if you expose endpoints to the internet, expect users to interact with them however they choose. Instead of adding client-side challenges that disrupt the user experience, focus on building a secure backend. And please, stop shipping business logic to the frontend - especially if you're going to obfuscate it so badly that it ends up breaking on non-Chrome browsers because that's the only browser you test with.
Of course, there are exceptions. If you genuinely need to use a WAF or add client-side challenges, please test your settings properly. There are websites out there that completely break on Linux simply because they are using Akamai with settings that just don't match the real world and were only tested on Mac or Windows. A little more care in testing could go a long way toward making your site accessible to everyone.
My favorite experience was trying to file taxes on Linux in Germany.
Turns out the ELSTER backend had code amounting to: if Chrome and Linux, then store to a test account. After it went online as a mandatory state-funded web service, it wasn't possible to file taxes on Linux for over 6 months until they fixed it. I can't even comprehend who writes code like that.
It also took me a very long while to explain to the BKA that I did not try to hack them, and that the people working at DATEV are just very incompetent.
It sounds like the easiest solution would be to install another browser (e.g. Firefox) until they fixed the issue. If it is only the combination of Chrome and Linux that is the problem, that is.
I agree with most of this. If every website followed these, the web would be heaven (again)...
But why this one?
>I don't force you to use SSL/TLS to connect here. Use it if you want, but if you can't, hey, that's fine, too.
What is wrong with redirecting 80 to 443 in today's world?
Security wise, I know that something innocuous like a personal blog is not very sensitive, so encrypting that traffic is not that important. But as a matter of security policy, why not just encrypt everything? Once upon a time you might have cared about the extra CPU load from TLS, but nowadays it seems trivial. Encrypting everything arguably helps protect the secure stuff too, as it widens the attacker's search space.
These days, browsers are moving towards treating HTTP as a bug and throwing up annoying propaganda warnings about it. Just redirecting seems like the less annoying option.
Some old-enough browsers don't support SSL. At all.
Also, something I often see non-technical people fall victim to is that if your clock is off, the entirety of the secure web is inaccessible to you. Why should a blog (as opposed to say online banking) break for this reason?
Even older browsers that support SSL often lack up-to-date root certificates, which prevents them from establishing trust with modern SSL/TLS certificates.
Fairly recently I attempted to get an (FPGA-emulated) Amiga, a G4 Power Macintosh running System 9.2, and a Win2000sp4 Virtual Machine online (just for very select downloads of trusted applications, not for actual browsing). It came as a huge surprise to find that the Win2K VM was the biggest problem of the three.
So? If they still power on and are capable of talking HTTP over a network, and you don't require the transfer of data that needs to be secured, why shouldn't you "let" them online?
Usually browsers on hobbyist legacy operating systems, to which modern browsers haven’t or can’t be ported, not to mention keeping root certificates up to date. Or even if they do support SSL, then only older algorithms and older versions of the protocol. It’s nice to still be able to browse at least part of the web with those.
The problem usually isn't SSL support; the problem is that older SSL and TLS versions are being disabled.
I actually have an example myself: an iPad 3. Apple didn't allow anyone other than themselves to provide a web browser engine, and at some point they deliberately stopped updates. This site used to work, until some months ago. I currently use it for e-books; if that weren't the case, I think by now it would essentially be software-bricked.
I acknowledge that owning older Apple hardware is dumb. I didn't pay for it, though.
When you force TLS/HTTPS, you are committing both yourself (server) and the reader (client) to a perpetual treadmill of upgrades (a.k.a. churn). This isn't a value judgement, it is a fact; it is a positive statement, not a normative statement. Roughly speaking, the server and client softwares need to be within say, 5 years of each other, maybe 10 years at maximum - or else they are not compatible.
For both sides, you need to continually agree on root certificates (think of how the ISRG had to gradually introduce itself to the world - first through cross-signing, then as a root), protocol versions (e.g. TLSv1.3), and cipher suites.
For the server operator specifically, you need to find a certificate authority that works for you and then continually issue new certificates before the old one expires. You might need to deal with ordering a revocation in rare cases.
I can think of a few reasons for supporting unsecured HTTP: People using old browsers on old computers/phones (say Android 4 from 10 years ago), extremely outdated computers that might be controlling industrial equipment with long upgrade cycles, simple HTTP implementations for hobbyists and people looking to reimplement systems from scratch.
I haven't formed a strong opinion on whether HTTPS-only is the way to go or dual HTTP/HTTPS is an acceptable practice, so I don't really make recommendations on what other people should do.
For my own work, I use HTTPS only because exposing my services to needless vulnerabilities is dumb. But I understand if other people have other considerations and weightings.
Except it's not actually true. https://www.ssllabs.com/ssltest/clients.html highlights that many clients support standard SSL features without having to update to fix bugs. How much SSL you choose to allow and what configurations is between you and your... I dunno, PCI-DSS auditor or something.
I'm not saying SSL isn't complicated, it absolutely is. And building on top of it for newer HTTP standards has its pros and cons. Arguably though, a "simple" checkbox is all you would need to support multiple types of SSL with a CDN. Picking how much security you need is then left to an exercise to the reader.
... that said, is weak SSL better than "no SSL"? The lock icon appearing on older clients that aren't up to date is misleading, but then many older clients didn't mark non-SSL pages as insecure either, so there are tradeoffs either way. But enabling SSL by default doesn't have to exclude clients necessarily. As long as they can set the time correctly on the client, of course.
I've intentionally not mentioned expiring root CAs, as that's definitely an inherent problem to the design of SSL and requires system or browser patching to fix. Likewise https://github.com/cabforum/servercert/pull/553 highlights that some browsers are very much encouraging frequent expiry and renewal of SSL certificates, but that's a system administration problem, not technically a client or server version problem.
As an end user who tries to stay up to date, I've just downloaded recent copies of Firefox on older devices to get an updated list of SSL certificates.
My problem with older devices tends to be poor compatibility with IPv6 (an addon in XP SP2/SP3, not enabled by default), and that web developers tend to use very modern CSS and web graphics that aren't supported on legacy clients. On top of that, you've got HTML5 form elements, what displays when responsive layouts aren't available (how big is the font?), etc.
Don't get me wrong, I love the idea of backwards compatibility, but it's a lot more work for website authors to test pages in older or obscure browsers and fix the issues they see. Likewise, with SSL you can test on a legacy system to see how it works, or run the Qualys SSL checker, for example. Browsers maintain backwards compatibility, but only to a point (see ActiveX, Flash in some contexts, Java in many places, the <blink> tag, framesets, etc.)
So ultimately compatibility is a choice authors make based on how much time they put into testing for it. It is not a given, even if you use a subset of features. Try using Unicode on an early browser, for example. I still remember the rails snowman trick to get IE to behave correctly.
People fork TLS libraries and make changes that are supposed to be transparent (well, they should be), and suddenly they don't have compatibility anymore. Any table with the actually relevant data would be huge.
Imagine if every single device or machine in your life had to be designed within 5 years of every other one, or they wouldn't work together.
We would be constantly trying to finish a home we could actually use, and forget about orchards or timber.
There's something deeply broken about computers. And that's from someone deeply in the camp of "yes, everybody must use TLS on the web".
> There's something deeply broken about computers.
It's just not that mysterious: if we want our communications to be secure (we do), then we can't reasonably use ciphers that have been broken, since any adversary can insert themselves in the middle and negotiate both sides down to their least secure common denominator, if they allow it.
What about governments? In my country they perform MITM attacks against unencrypted HTTP, while the best they can do with HTTPS is to block the site. I'd much prefer everyone enforcing HTTPS at all times.
Those are some of the most pedantic grasping at straws reasons I've ever read. It's like they know there's nothing wrong with http so they've had to invent worst case nightmare scenarios to make their "It's so important" reasons stick.
Https is great. I use it. That website is pathetic though.
this is the statement of someone who wasn't around in 2013 when the snowden leaks happened and google's datacenters got owned. everyone switched to https shortly thereafter
I understand the thinking, backwards compatibility of course, and why encrypt something that is already freely available? But this means I can setup a public wifi that hijacks the website and displays whatever I want instead.
TLS is about securing your identity online.
I think with AI forgeries we will move more into each person online having a secure identity. Starting with well know personas and content creators.
Both Chrome and Firefox will get you to the HTTPS website even though the link starts with "http://", and it works, what more do you want?
You have to type "http://" explicitly, or use something that is not a typical browser to get the unencrypted HTTP version. And if that's what you are doing, that's probably what you want. There are plenty of reasons why, some you may not agree with, but the important part that the website doesn't try to force you.
That's the entire point of this article: users and their browsers know what they are doing, just give them what they ask for, no more, no less.
I also have a personal opinion that SSL/TLS played a significant part in "what's wrong with the internet today". Essentially, it is the cornerstone of the commercial web, and the commercial web, as much as we love to criticize it, brought a lot of great things. But also a few not so great ones, and for a non-commercial website like this one, I think having the option of accessing it the old (unencrypted) way is a nice thing.
My first impulse is to scream obscenities, because I've seen this argument repeated so many times that I usually just keep quiet. I don't think you can't understand; I think you refuse to.
You're basically saying "oh, _YOUR_ usecase is wrong, so let's take this away from everybody because it's dangerous sometimes"
But yeah, I have many machines which would work just fine online except they can't talk to the servers anymore due to the newer algorithms being unavailable for the latest versions of their browsers (which DO support img tags, gifs and even pngs)
HTTP/2 doesn't matter in this case; there are only 4 files to transfer: the webpage itself (HTML), the style sheet (CSS), the feed icon, and the favicon. You can do with only the HTML -- the CSS makes it look better, and the other two are not very important.
It means that HTTP/2 will likely degrade performance because of the TLS handshake, and you won't benefit from multiplexing because there is not much to load in parallel. The small improvement in header size won't make up for what TLS adds. And this is just about network latency and bandwidth. HTTP/2 takes a lot more CPU and RAM than plain HTTP/1.1. Same thing for HTTP/3.
Anyways, it matters even less here because this website isn't lacking SSL/TLS, it just doesn't force you to use it.
I have pings in excess of 300 ms to her site. TCP connections need a lot of time to "warm up" before speeds become acceptable. It's easy to say things like "http2 does not matter" when you're single digit milliseconds away from all major datacenters.
HTTP/2 matters on bloated websites with tons of external resources, it is not the case here. HTTP/2 will not get you the first HTML page faster and this is the only thing needed here to start showing you something.
In terms of round trips, HTTP/1.1 without TLS will do one less than HTTP/2 with TLS, and as much as HTTP/3 with TLS.
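Concretely, counting round trips to the first byte of HTML (assuming TLS 1.3, no 0-RTT, and ignoring DNS):
- HTTP/1.1, no TLS: 1 RTT TCP handshake + 1 RTT request/response = 2 RTTs
- HTTP/2 over TLS: 1 RTT TCP + 1 RTT TLS 1.3 handshake + 1 RTT request = 3 RTTs
- HTTP/3 (QUIC): 1 RTT combined transport+TLS handshake + 1 RTT request = 2 RTTs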
It's annoying that every time "they" come up with a new antipattern, "we" have to add yet another extension to the list of mandatory things for each browser. And it also promotes browser monopoly because extensions get ported slowly to non-mainstream browsers.
It would be better to have a single extension like uBlock origin to handle the browser compatibility, and then release the countermeasures through that. In fact, ublock already has "Annoyances" lists for things like cookie banners, but I don't think it includes the dick bar unfortunately.
Incidentally, these bars are always on sites where the navbar takes 10% vertical space, cookie banner (full width of course) takes another 30% at the bottom, their text is overspaced and oversized, the left/right margins are huge so the text is like 50% of the width... Don't these people ever look at their own site? With many of these, I'm so confused how anyone could look at it and say it's good to go.
The president was very insistent that we show popup ads at six different points in time, until he got home and got six popup ads, and said, “You know what? Maybe just two popups.”
The OP blames "various idiot web 'designers'" for problems, but in my 30 years of being a web designer I have yet to meet one designer that wants to cause these problems. It's usually people responsible for generating revenue.
The web designers and developers are at the very least complicit. They are ultimately the ones typing in the code and hitting submit, so they at least must share the blame.
True, I'm just saying I don't think that's where the problems originate.
In my practice, I'll try a Jedi mind trick, e.g. "Trying to [state larger goal] makes a lot of sense. An even more effective way to do that is to [state alternate, non-toxic technique]."
Extensions are already there: uBO, Stylebot. We just have to invent a way to share user-rule snippets across them. There will always be a gray zone between trusted adblock lists included by default and more preferential things.
Nice. This may be my pet peeve on the modern internet. Nearly EVERY site has a dick bar, and the reason I care is it breaks scrolling with spacebar, which is THE most comfortable way to read long content, it scrolls you a screen at a time. But a dickbar obscures the first 1 to…10? lines of the content, so you have to scroll back up. The only thing worse than the dickbar is the dickbar that appears and disappears depending on last direction scrolled, so that each move of the scrolling mechanism changes the viewport size. A pox on them all.
> Nearly EVERY site has a dick bar, and the reason I care is
that when reading on my laptop screen, it takes up valuable vertical space on a small display that is in landscape mode. I want to use my screen's real estate to read the freaking content, not look at your stupid branding bar.
And I don't need any on-page assistance to jump back to the top of the page and/or find the navigation. I have a "Home" key on my keyboard and use it frequently.
TBF, many people don't have that Home key. I agree with you, though - there should be a better solution. At the very least, just have an optional "Top of page" toolbar button in your browser.
On Android I think the typical interaction is to "fling" the page downward, which will rapidly scroll until you crash into the top. Seems adequate for all but the longest pages.
I often scroll with the space bar instead of more modern contrivances like arrow keys, scroll wheels, trackpoints, or trackpads. Sites with these header bars always seem to scroll the entire viewport y size instead of (y - bar_height), so after I hit space I have to up-arrow some number of times to see the next line of text that should be visible but is hidden under the bar.
I am usually the first old man to yell at any cloud, and I was overjoyed when someone invented the word "enshittening" for me to describe how the internet has gotten, but it surprised me a bit that people found that one annoying. I can see the problem of it sticking the top of the page with a logo (which is basically an ad and I hate those), but they usually have a menu there, so I always thought of them a bit like the toolbar at the top of an application window in a native desktop application. FWIW when I've built those, I've always de-emphasized the branding and focused on making the menus obvious and accessible.
I'm happy to learn something new about other people's preferences, though. If people prefer scrolling to the top, so be it!
EDIT: It occurs to me that this could be a preference setting. A few of the websites that have let me have my way, I've started generating CSS from a Django template and adding configuration options to let users set variables like colors--with really positive feedback from disabled users. At a fundamental level, I think the solution to accessibility is often configurability, because people with different disabilities often need different, mutually incompatible accommodations.
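As a sketch of that approach (the preference names here are made up for illustration):

  /* styles.css rendered through a Django template */
  body {
    background: {{ prefs.background_color }};
    color: {{ prefs.text_color }};
    max-width: {{ prefs.column_width }}em;
  }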
Another thing to check with sticky headers is how they behave when the page is zoomed. Often the header increases in size proportionately, which can shrink the effective reading area quite a bit. Add in the frequent sticky chat button at the bottom, and users may be left with not a lot of screen to read text in.
There can be a logic to keeping the header at the top like a menu bar, and I applaud you if you take an approach that focuses on value to the user. Though I'd still say most sites that use this approach, don't have a strong need for it, nor do they consider smaller viewports except for portrait mobile.
Configuration is great, though it quickly runs into discoverability issues. However it is the only way to solve some things - like you pointed out with colors. I know people who rely on high contrast colors and others that reduce contrast as much as they effectively can.
This is exactly what CSS was designed for: allowing you to define your personal style preferences in your browser, applying them across all websites. The term ‘cascading’ reflects this purpose.
Unfortunately, the web today has strayed far from its original vision. Yet, we continue to rely on the foundational technologies that were created for that very vision.
IMO browsers are broadly dropping the ball and failing to be "the user's agent." Instead they are the agents of web developers, giving them the powers that users should have.
If browsers catered to their user's desires more than they cater to developers, the web wouldn't be so shitty.
This is going to be an unpopular opinion, but I think the beginning of the end was the invention of JavaScript. Pulling down an unknown chunk of code from the internet and running it is malware. Even if browsers successfully sandbox the JS (a promise which they've failed to keep numerous times) it can do all sorts of stuff that doesn't serve me, like mine crypto (theft of resources) or display ads (adware).
The primary benefit of web applications is they don't lose your data. Not a single web application UI that exists provides as good a user experience as the native desktop applications that came before. A web where browsers provided their own UIs for various document types, and those document types could not modify their UIs in any way, period, would be a better web. You serve up the document, I get to control how it looks and behaves.
I agree with a lot of the complaints on this article except I think like two, and this is one of them. I think a sticky header is incredibly useful, and they're not something new. Books have sticky headers! Every page of a book will generally list the title and author on the top of each page. I find it just a useful way to provide context and to help me remember who/what I'm reading. The colours/branding of the sticky header communicate that much better to me than the tiny text-only title/url of my browser. And the favicon like-wise doesn't contain enough details for me to latch onto it.
But for UX: (1) Keep it small and simple! It shouldn't be more than ~2 lines of text. (2) Make it CSS-only; if you have to use custom JS to achieve a certain effect, be ready to spend a LOT of time to get the details right, or it'll feel janky. (3) Use `scroll-padding` in the CSS to make sure links to sections/etc work correctly.
I prefer it because I read by scrolling down one line at a time. This means that when I want to go back and read the previous couple of lines, I have to scroll up. This shows a big stupid menu of unknown size and behaviour on top of the text I'm trying to re-read.
The biggest problem for me is the randomness between different sites. It's not a problem for Firefox to display a header when I scroll up, since I can predict its behaviour. My muscle memory adapts by scrolling up and then down again without conscious thought. It's a much bigger problem if every site shows its header slightly differently.
I think the key thing is that when I scroll up, 95% of the time I want to see the text up the page, and at most maaaaaaaybe 5% of the time I want to open the menu. This is especially true if I got to your website via a search engine. I don't give a damn what's hidden in your menu bar unless it's the checkout button for my shopping cart, and even then I'd prefer you use a footer for that.
You can disable this as a browser setting in some browsers (like Chrome). It was driving me nuts until I figured out I could just flip a global flag for it.
There's probably a better way, but I use the graphical element zapper from uBlock Origin to hide distracting elements.
Works wonders for sites that I visit regularly. StackOverflow: Do I need related posts? The left sidebar (whatever it is they have there, I have forgotten already)? Their footer?
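You can also make zaps permanent as static cosmetic filters (the "My filters" pane); the syntax is domain##selector. The selectors below are illustrative, not the sites' actual class names -- check with the picker first:

  stackoverflow.com##.related-posts
  stackoverflow.com##.left-sidebar
  example.com##.newsletter-popup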
It's amazing to me what people tolerate, just because it doesn't seem like a human is doing it to us. If a door-to-door salesman was told to do the equivalent of this stuff, they'd be worried about being punched in the face.
The logic here is that it's you who comes to visit, not them. But the next issue is that everyone agrees it's not normal for a private service either, even a free one, and that it should be disallowed. But laws don't work like that. We simply have no legal system that could manage that, nowhere on this planet.
If the world was a nightclub, the tech industry would be a creepy guy who goes up to everyone and says "You're now dating me. [Accept or Try Again Later]"
this is exactly the sort of idealistic post that appeals to HN and nobody else. i dont have a problem with that apart from when technologists try to take these "back to basics" stuff to shame the substacks and the company blogs out there that have to be more powered by economics than by personal passion.
its -obvious- things are mostly "better"/can be less "annoying" when money/resources are not a concern. i too would like to spend all my time in a world with no scarcity.
the engineering challenge is finding alignments where "better for reader" overlaps with "better for writer" - as google did with doubleclick back in the day.
You do remember the "punch the monkey" ads that were just an animated GIF you could click anywhere to "win". They were an early form of engagement bait.
There were unscrupulous people posting on Usenet for monetary gain before the web.
This actually appeals to everyone. There are words and people can read them. It literally just works. With zero friction. This is peak engineering. It's how the web is supposed to work. It is objectively better. For everyone. Everyone except advertisers.
The only problem to be solved here is the fact advertisers are the ones paying the people who make web pages. They're the ones distorting the web into engagement maximizing content consumption platforms like television.
All the tracking stuff is better for advertisers than going without, and most writers are paid by advertisers. So transitively it would be reasonable to say that tracking is good for writers and bad for readers.
People oversell this tracking/advertising. It's not a goldmine for every site. For this blog, if she wanted to include analytics into her decision about what content to produce, does she really need super high resolution stuff like where people moved their mouse? Would she ever make a significant income from these "ads", or selling the data for possibly pennies?
Besides, just google analytics or something like that wouldn't be that bad (I know the blog author would disagree). A lot of sites go nuts and have like 20 different trackers that probably track the same things. People just tack stuff on, YAGNI be damned, that's a big part of the problem and it's a net drain on both parties.
> just google analytics or something like that wouldn't be that bad
Google Analytics is the worst. Not on any individual website, but in the fact that it is almost everywhere, so Google has been collecting everyone's web history for more than a decade.
Add Android, Gmail, and the social "share" or "login with" integrations, and any Stasi member would have called you delirious for thinking this kind of surveillance apparatus was possible -- even more so for thinking people would willingly accept it.
I mostly agree with this. Commercial websites probably should track engagement and try to increase it. They should probably use secure http. They probably should not care about supporting browsers without JS. If they need sign in then signing in with Google is useful. There's no harm in having buttons to share on social media if that will help you commercially.
Where I think the post hits on something real is the horrible UI patterns. Those floating bars, weird scroll windows, moving elements that follow you around the site. I don't believe these have been AB tested and shown to increase engagement. Those things are going to lose you customers. I genuinely don't understand why people do this.
Your argument is that writers do this because of "economics", but to the detriment of readers. I don't see how this extends only to HN readers. It applies to all readers in general.
The author isn't trying to profit from the reader's attention; it's just a personal blog. An ad-based business would. Neither is right or wrong, but the latter is distinctly annoying.
Ad-based businesses exist because a lot of people (including many on this forum) refuse to pay for anything. During the late 1990s/early 2000s, people hated paying for anything and demanded that everything on the Internet should be free. Well, that led to the vast surveillance machine which powers Google, Facebook, and every ad-tech business out there. They need surveillance because it lets them serve more relevant ads and more relevant ads make more money.
The bottom line is if you hate ad-based businesses, start paying for things.
Netflix does $30B in revenue. Spotify over $10B. Steam an estimated $10B. Those are services where anyone could figure out how to get the stuff for free with a few minutes of research. People pay when they perceive value.
A better way to characterize what's happening is that there is a lot of material out there that no one would ever pay for, so those companies instead try to get people's attention and then sell it.
Their bait never was and never will be worth anything. People aren't "paying with ads"; they're being baited into receiving malware, and a different group of people pay for that malware delivery.
A personal take is that ad-based businesses exist because there’s no secure widespread reliable approach for micropayments (yet?).
The mean value of adverts on a page is on the order of a tiny fraction of a cent per reader, which is presumably enough for the businesses that continue to exist online. If it were possible to pay this amount directly instead, and have an ad-free experience, I suspect many would do so, as the cumulative amount would usually be negligible. Yet so far, no-one's figured it out.
(I should mention, there are very strong reasons why it’s difficult to figure out currently, but AIUI these are rooted in the current setup of global payments and risk management by credit card companies.)
I reject that idea, simply because companies would then offer micropayment access and STILL put ads on top. Sure, YouTube and Spotify will disable them for you. But for every one of those we have a Netflix and its dark-side cohort.
Is there anything that indicates that customers want micropayments?
Music and video streaming services are syndicating content from millions of creators into single subscription services. Why is it so impossible to make mega conglomerates for textual content? Why is nobody doing this?
Right now, creators are forced to make YouTube videos, because that's their most viable path to getting paid for their work. Why does it have to be this way, when a lot of what they do would be better as text instead of as a talking head?
I pay for things, but often there is no option to pay. When there is, eventually the company figures out that they can have you pay AND show you ads. Then the argument becomes "well if they opt not to have ads, you should pay more for the privilege."
But no matter the cost of a thing, you can always "make more" by adding ads and keeping the cost as is. So eventually, every service seems to decide that, well, you DESERVE the ads, even if you pay.
Sure, competition could solve this, but often there isn't any.
Yes, it’s the individuals’ fault. Google, FB, and the rest need to spy on us! I feel just awful for those poor companies.
No. If your business model requires you to do evil things, your business should not exist.
Anyway, I do pay for services that provide value. I was a paying Kagi customer until recently, for example (not thrilled with the direction things are going there now though).
Product development disagreements are largely immaterial to me, though the discussion around their integrations with Yandex remind me of prior discussions around their integrations with Brave.
Substack's UI is fairly minimal and does not appear to have many anti-patterns. My only complaint is that it is not easy to see just the people I am subscribed to.
On the first or second page view of any particular blog, the platform likes to greet you with a modal dialog to subscribe to the newsletter, and you have to find and click the "No thanks" text to continue.
Once you're on a page with text content, the header bar disappears when you scroll downward but reappears when you scroll upward. I scroll a lot - in both directions - because I skim and jump around, not reading in a rigidly linear way. My scrolling behavior is perfectly fine on static/traditional pages. It interacts badly with Substack's "smart" header bar, whose animation constantly grabs my attention, and also it hides the text at the top of the page - which might be the very text I wanted to read if it wasn't being covered up by the "smart" header bar.
Substack absolutely refuses to stop emailing you. It's simply not possible to subscribe to paid content and NOT have them either email you or force push notifications. Enough people have complained about this that it's pretty obvious this is intentional on their part.
Let's be real. If you have a website where you are trying to sell something to your page visitors (not ad clicks or referral links), then each of these annoyances and hurdles increases the risk that a potential customer backs out.
If you give great customer service, you get great customers – and they don't mind paying a premium.
If you're coercing customers, then you get bad customers – and they are much more likely to give you trouble later.
Most business owners are your run-of-the-mill dimwits, because we live in a global feudal economic system – and owning a business doesn't mean you are great at sales or have any special knowledge of your business domain. It usually just means you got an inheritance, or that you have the social standing to be granted a loan.
Back in the day it used to be the way to do it. Web servers would serve example.com/foo/index.html when visiting example.com/foo/ and it was the easy way of not having example.com/foo.html as your URL.
Later on Web servers made it easier to load foo.html for example.com/foo, but that wasn't always the case.
It's been a couple of decades since I had to do it, but at least that's my memory on why I've been doing this since forever (including on newer websites, which is admittedly a mistake).
I've got nothing against it. Just parse it as "the default file for that path" since one isn't specified. Just like you expect that for the main page e.g. "https://news.ycombinator.com/" or "https://google.com/" except as a subdirectory.
> both trailing slash variant examples you have provided (HN and Google) do redirect to non slash ones
This is incorrect. Chrome (and Firefox by default?) have the broken behavior of showing bare URLs like "google.com" or even "https://google.com". But this is absolutely wrong according to the URL spec and HTTP spec. After all, even if you want to visit "https://google.com", the first line that your browser sends is "GET / HTTP/1.1". Notice the slash there - it is mandatory, as you cannot request a blank path.
Things were better in the old days when browsers didn't mess around with reformatting URLs for display to make them "look" human-friendly. I don't want "www." implicitly stripped away. I don't want the protocol stripped away. I don't want the domain name to be in black but the rest of the text to be in light gray. I just want honest, literal URLs.
In Firefox, this can be accomplished by setting: browser.urlbar.formatting.enabled = false; browser.urlbar.trimURLs = false.
I like it as a clear visual indication of where the URL ends. And a page often consists of more than just one file (images etc.) (the HTML portion of the page is just an implicit index.html in the directory), so it does make some sense.
But is an article an index to the attached media? Not even "just", but "at all"? Is this the right abstraction? Or do we have a better one, achievable by simply removing the trailing slash?
We discuss this in the context of cruft, user friendliness, and selecting proper forms of expression, which the original article seems to be all about, by the way.
My point is that “an article” generally doesn’t consist of just a single file. Or maybe rather, that the distinction between a file or document and a folder isn’t necessarily meaningful. Physical “files” (i.e. on paper, as in a filing cabinet) actually commonly used to be folders. And there is little difference between a zip file (which are used for a number of popular document formats) and the unzipped directory it contains.
Unlike in file systems, we don’t have the directory–file distinction on the web, from the user perspective. Everything shown in the browser window is a page. We might as well end them with a slash. If anything, there is a page–file distinction (page display in the browser vs. file download). I agree that URLs for single-file downloads (PDFs, images, whatever) should not have a trailing slash.
> I don't do popups anywhere. You won't see something that interrupts your reading to ask you to "subscribe" and to give up your e-mail address.
Every time I get hit with a popup by a site I usually just leave. Sometimes with a cart full of items not yet paid for. It's astounding that they haven't yet learned that this is actually costing them business. Never interrupt your customers.
Same goes for stores. If I walk into your store to browse and you accost me with "Can I help you" I'll be looking for an exit ASAP.
> I don't pretend that posts are evergreen by hiding their dates.
I didn't realise that hiding dates for the illusion of evergreen-ness was a desirable thing!
On my personal site I added dates to existing pages long after they were uploaded for the very reason I wanted it to be plenty clear that they were thoughts from a specific time.
For example, a bloggish post I wrote where, while I still think it's right, it now sounds like linkedin wank. I'm very happy for that one to be obviously several years old.
Supposedly it’s an SEO thing. The theory is that Google is biased towards novelty and so penalises older articles (although I’m not sure how removing the date would help because surely Google would still know how long the article has been online for.)
I have no idea how true that is but I remember hearing SEO folks talk about it a few years back.
Some content mills seem to display a date but automatically update it periodically. Sometimes you can outright see it can't be correct since the information is woefully out of date or you can check from Internet Archive that the content is the same as before but with a bumped date.
A web page can be looked at as a document or an application. Those publishers that use it as a document produce a wholly better experience for reading text. There's a lot of pressure to go down the application path, even Wikipedia has done it, which sucks big time.
I think many of these are just design trends. As in, I think in a lot of cases web designers will add these “features” not for a deeply considered reason, but simply because that’s the thing everyone else seems to be doing.
I’ve had to be pretty firm in the past with marketing teams that want to embark on a rebrand, and say however the design looks, it can’t include modal windows or animated carousels. And I think people think you’re weird when you say that.
> I’ve had to be pretty firm in the past with marketing teams that want to embark on a rebrand, and say however the design looks, it can’t include modal windows or animated carousels. And I think people think you’re weird when you say that.
Some small businesses create websites for branding only, and get their business exclusively offline. They just want to have a simple, static site to say "we exist, and we are professionals", so they are fine with the latest in web design.
Right. What I’m suggesting is their simple static site should probably just show the content they want to show, rather than write extra code [and add additional complexity] which makes that content gratuitously slide around the screen.
Carousels exist because everyone wants their pet project to be on the home page, and no one at the company has enough willpower to put a stop to that nonsense. No one actually likes the things.
Look up any typographic manual and you'll learn that you can't make lines of text too wide or else people will have trouble reading them. Example - https://practicaltypography.com/line-length.html .
This is also related to why professional newspapers and magazines lay out text in relatively narrow columns, because they are easy to scan just top-down while hardly moving your eyes left-right.
I do think that vertical phones are too narrow for conveying decent text, but you also can't have completely unbounded page widths because people do run browsers maximized on desktop 4K screens.
That's true, but 60 characters is way toward the "too narrow" side of the scale. I'd fatten the page to ~45--55 em (or rem), and BTW, strongly prefer relative font-size units to pixels, which are increasingly unreliable as size determinants, particularly as high-definition, high-DPI displays on monitors, smartphones, and e-ink devices become more common. Toto, we're not in 96 dpi land any more.
I also strongly prefer at least some padding around the edges of pages / text regions, with 5--10% usually much easier to read.
I'd played with making those changes on Rachel's page through Firefox's inspector:
html { font-family: garamond, times, serif; }
body { max-width: 50em; }
.post { padding: 2em 4em; }
Unless you're banging directly on the framebuffer, logical pixels haven't been tied to device pixels for literally decades. CSS specifies pixels at 1/96 of an inch, a decision that goes all the way back to X11. 1rem == 16px, though this can be changed in CSS (just set font-size on the :root element) whereas you can typically only change pixel scaling in your display settings.
So yes, using rems is better, but pixels are not going to get dramatically smaller on denser displays unless the device is deliberately scaling them down (which phones often do simply because they're designed to be read up-close anyway)
My experience, for decades, has been that ems / rems are almost always preferable for scaling anything that's relative to text: body width, margins, padding, etc.
It's also possible to scale text itself to the reader's own preference, if any, by setting the body font size to "medium" (the browser default). Assuming the reader has set that value in their browser, they get what they expect; and for the 99.99966% of people who go with their browser's shitty default, well, they can zoom the page as needed.
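Something like this, as a sketch:

  html { font-size: medium; }   /* the user/browser default, typically 16px */
  body { max-width: 40em; padding: 0 5%; }
  h1 { font-size: 1.5rem; }     /* scales with the reader's preference */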
(Most people don't change defaults, which is one key reason to use sane ones in products and projects.)
Sites which use px or pt (o hai HN) for scaling of text or fonts absolutely uniformly fail to please for me.
(See my HN madhackery CSS-mod links in my profile here; that's what I'm looking at as I type this. On my principal e-ink browser those aren't available, and I'm constantly fiddling with both zoom and contrast settings to make HN usable.)
Making pixel-based styling even more janky by not being actual pixels any more seems ... misguided.
That research may be true, but the layout of the page should be up to the user, not imposed by the developer. If I want my browser to display a web page using the entire maximized 4K browser window, that should be something 1. I can easily configure and 2. web developers respect, no matter what the "typographic researchers" think.
You might be more sophisticated than the average reader. Less sophisticated readers will just navigate away instead of messing with settings they don't understand.
The style of the page can use CSS column properties to make use of the width of laptop/tablet displays, instead of defaulting to ugly "mobile size fits all" templates.
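E.g., a sketch that only engages on wide viewports:

  @media (min-width: 70em) {
    article { columns: 2; column-gap: 3em; }
  }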
While an interesting post because of the number of examples provided, this does read like somebody patting themselves on the back for building a website like it's 1995, when websites were not designed with the intention of making money or acting as a lead gen funnel.
Let's have a look at the websites she's helped build at her job and see how many of those old web principles were applied.
One of my gripes with venture capital is that, were I to accept a large amount, I would be required to do all of those annoyances as part of the marketing plan they would impose on me.
And I feel a lot of those measures have been unnecessary - thinking back to my time at enterprise software product vendors, they had myriads of those kinds of annoyances to track "engagement" on their page.
The actual customers? Basically the big banks, in one case. Just how much were all those marketing/tracking cookies and scripts doing to secure those sales leads? Each bank had, essentially, its own dedicated salesperson/account manager - I don't think any bank is picking a vendor because one website had more marketing tracking scripts on it than another.
I agree with pretty much everything (and have implemented my own blog like it), but I would like to expand on a few things:
> I don't load the page in parts as you scroll it. It loads once and then you have it.
Lazy-loaded images are helpful for page performance reasons, as done by <img loading="lazy">. I have a script that flips one to eager (so it loads immediately) every few seconds, depending on page load speed, so that if you leave the page alone for a while, it fully loads.
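The core of that idea could look something like this minimal sketch (the fixed 3-second interval is a placeholder; the commenter's actual script adapts to page load speed):

    // Every few seconds, promote one lazy image to eager loading, so the
    // page eventually finishes loading even if the reader never scrolls.
    const lazy = [...document.querySelectorAll('img[loading="lazy"]')];
    const timer = setInterval(() => {
      const img = lazy.shift();
      if (!img) { clearInterval(timer); return; }
      img.loading = 'eager';
    }, 3000);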
> I don't put godawful vacuous and misleading clickbait "you may be interested in..." boxes of the worst kind of crap on the Internet at the bottom of my posts, or anywhere else for that matter.
Most of the posts on my blog[0] are about whatever videogame I was just playing. Often, I'll play through installments in a series, or mention one game while talking about another. While I litter the text with links back to previous entries, I feel that it would be helpful to have a collection of these near the bottom of the page. How else would you know that I've written about the sequel? (I don't like to go back to old posts and add links to newer stuff like that.)
I have a "you might be interested in" section. My algorithm: do a search on the post title (up to the first number or colon, but this can be customized), then add recent posts from the category you're looking at. Limit 6. I feel that genuinely shows everything relevant that I've got without being 'misleading' or 'clickbait'.
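As a sketch of that algorithm (the post fields here are hypothetical, not the commenter's actual code):

    // "You might be interested in": title-stem matches first, then recent
    // posts from the same category; dedupe and cap at 6.
    function relatedPosts(post, allPosts, limit = 6) {
      const stem = post.title.split(/[0-9:]/)[0].trim(); // title up to first number or colon
      const byTitle = allPosts.filter(p => p !== post && p.title.includes(stem));
      const byCategory = allPosts.filter(p => p !== post && p.category === post.category);
      return [...new Set([...byTitle, ...byCategory])].slice(0, limit);
    }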
> I don't force people to have Javascript to read my stuff.
Agreed! JS should be used to enhance the experience, not be the experience. This mindset is so baked into how I write it that most of my blog's JS functions have "enhance" in them.
> I don't force you to use SSL/TLS to connect here. Use it if you want, but if you can't, hey, that's fine, too.
Didn't we learn anything from Snowden? The NSA has recorded your receipt of this message.
Recently some webpages started failing with a JavaScript error message saying something is not supported, even though my JS is activated. Probably the device I used was deemed „outdated“.
Of course, the webpage and its full text content are shown to me for 3 seconds before the error message appears and blocks access.
Is there a way to instruct browsers, when available, to go right into reader mode? I wonder if, when your page is as minimal as this one, you may as well just do that instead.
Or I guess at that point, you just don’t do styles?
Of all the things some people don’t do with their webpage, I’m the biggest fan of not doing visual complexity.
My iOS Safari has it. I turned it on for the NYT, because I wanted a dark theme and then turned it off again because I realized that I like what they do with their pages (still have an ad blocker turned on though, because subscribers still see tons of ads).
My favourite pet peeve is opening an interesting looking link in a new tab, and then when I finally get around to looking at the tab, it's just a giant overlay with a prompt for an email address, with no hint of what I was actually trying to read.
Letting marketing folks on the internet was a mistake.
The one annoyance inflicted is the pointless container-for-everything with rounded corners. It makes the web page optically smaller on mobile and seems to serve no purpose.
Just extend the background to the very corners like hacker news does!
> I don't do some half-assed horizontal "progress bar" as you scroll down the
> page. Your browser probably /already/ has one of those if it's graphical. It's
> called the scroll bar. (See also: no animations.)
Sadly, I would argue that this is inaccurate. Especially on mobile browsers, the prevalence of visible scroll bars seems to have dropped off a cliff. I'll happily excuse the progress bar, especially because this one can be done without JavaScript.
JS progress bars also generally show you your progress through the main-content div or whatever, so even if they have a particularly egregious footer (I've seen footers that are over 1000px tall, with embedded youtube videos), the progress through the actual content is still somewhat faithfully reported.
Better would be to ditch the absurd footer, but still.
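For what it's worth, the no-JavaScript version is possible in browsers that support CSS scroll-driven animations; a hedged sketch (element id and styling made up):

    #reading-progress {
      position: fixed; top: 0; left: 0;
      width: 100%; height: 3px;
      background: steelblue;
      transform-origin: 0 50%;
      animation: fill-bar auto linear;
      animation-timeline: scroll();  /* drive the animation from the scroll position */
    }
    @keyframes fill-bar { from { transform: scaleX(0); } to { transform: scaleX(1); } }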
Actually bookmarked, since Rachel has mentioned several annoyances that are easy to accidentally include even if you have the best of intentions. Wish she gave this in checklist and categorized form instead of long-form text.
LOL'ed at "dick bar" - seriously that thing is so annoying.
Being nitpicky, and since the article itself focuses on things not inflicted on users, here are a few things it still inflicts on users:
- Changing line-height.
- Changing fonts (or trying to, if it is allowed in a web browser).
- Changing colors (likewise).
- Changing body's max-width, margins, paddings.
- Adding a mostly useless header.
I find these less annoying than the ones listed in the article, and they are easily mitigated by the reader view, disabled CSS, or custom global CSS, but there they are.
I used to agree with you, but a pure text web looks like Gemini, which I abandoned after a few days of getting lost in endless identical looking blogs.
There is no reason that websites shouldn't have room for some creative expression. For as long as writing has existed, images, fonts, spacing, embellishment, borders, and generally every imaginable axis has been used as additional expression, beyond the literal meaning of the text.
The body width is necessary because web browsers have long since abandoned any pretense of developing html for the average joe. It is normal to use web browsers maximized, so without limiting the body width the text is ridiculously long and uncomfortable to read.
It's great that some people are fighting back against this. But it's too late. The modern web is unusable without browser extensions or ad/annoyance blockers.
I agree with pretty much everything on that page except:
> Web page annoyances that I don't inflict on you here / I don't use visitor IP addresses outside of a context of filtering abuse.
This point bit me personally about 5 years ago. As I browsed HN at home, I found that links to her website would not load - I would get a connection timed out error. Sometimes I would bookmark those pages in the hopes of reading them later. By accident, I noticed that her website did load when I was using public Wi-Fi or visited other people's homes.
I assumed it was some kind of network routing error, so I emailed my Canadian ISP to ask why I couldn't load her site at my home. They got back to me quickly and said that there were no networking problems, so go email the site operator instead. I contacted Rachel and she said - and this is my poor paraphrasing from memory - that the IP ban was something she intentionally implemented but I got caught as a false positive. She quickly unbanned my IP or some range containing me, and I never experienced any problems again. And no, I never did anything that would warrant a ban; I clicked on pages as a human user and never botted her site or anything like that, so I'm 100% sure that I was collateral damage for someone else's behavior.
The situation I saw was a very rare one, where I'd observe different behaviors depending on which network I accessed her site from. Sure, I would occasionally see "verification" requests from megacorps like Google/CAPTCHA, banks, Cloudflare, etc. when I changed networks or countries, but I grew to expect that annoyance. I basically never see specific bans from small operators like her. I don't fault her for doing so, though, as I am aware of various forms of network and computer system abuse, and have implemented a few countermeasures in my work sporadically.
> I don't force you to use SSL/TLS to connect here. Use it if you want, but if you can't, hey, that's fine, too.
Agreed, but I would like HN users to submit the HTTPS version. I'm not doing this to virtue-signal or anything like that. I'm telling you, a number of years ago when going through Atlanta airport, I used their Wi-Fi and clicked on a bunch of HN links, and the pages that were delivered over unsecured HTTP got rewritten with injections of the ISP's ads. This is not funny and we should proactively prevent that by making the HTTPS URL be the default one that we share. (I'm not against her providing an HTTP version.)
As for everything else, I am so glad that her web pages don't have fixed top bars, the bloody simulated progress bar (I like my browser's scrollbar very much thank you), ample visual space wasted for ads (most mainstream news sites are guilty), space wasted mid-page to "sign up to my email newsletter", modal dialog boxes (usually also to sign up to newsletter), etc.
> As I browsed HN at home, I found that links to her website would not load
Thanks for mentioning this, because I was having the same issue and I was surprised no one was mentioning that the site was (appeared to be) down. Switching to using a VPN made the post available to me.
It's probably reasonable to use HSTS to force https-aware browsers to upgrade and avoid injection of all the things she hates. Dumb browsers like `netcat` are not harmed by this at all. But even then ... why aren't you using `curl` or something?
> It's probably reasonable to use HSTS to force https-aware browsers to upgrade and avoid injection of all the things she hates.
There's a broad spectrum between a browser that is "aware" of https and a browser that has all the cipher suites, certificates, etc to load a given page.
If a browser does not support modern TLS (SSL), it probably also has unpatched security flaws. Unpatched browsers should never be used on the Internet because they will get hacked.
Sure but as a server operator, who cares? I already have zero trust in the client and it's not my job to punish the user for not being secure enough. If they get pwned, that's their problem.
Unless I'm at work, where there are compliance checkboxes to disallow old SSL versions, I'll take whatever you have.
Everyone using HTTPS protects everyone. Having some operators choose to not migrate to HTTPS-only websites makes the web less secure by increasing the surface area of attacks on users.
Such a good list, I may have to copy it for my own site and stuff it somewhere as a "colophon" of sorts. Maybe this kind of thing should even be a machine-readable standard...
Rachel, I'm curious as to your mentions of 'old posts' that may not be compliant, e.g. missing an alt attribute - is this something you've considered scanning your html files for and fixing?
Over the last year I’ve gotten a couple of offers from PCB manufacturers to make my projects in exchange for a review and visibility in my projects and on my site. It was tempting, but every time I thought about doing it, it felt off.
I really like writing to readers and not obligating them to anything else. No sales push, no ads, no sign ups. It’s nice that it’s just what I wanted to share.
Ah, but that's just the stylistic churn. There's also monetization, illustrated by this dark and terrifying tale, best viewed in a desktop browser for proper impact:
Previously I've used the "disable styles" shortcut key in the Firefox web developer extension to make unfriendly websites more tolerable. Today, I wish Chrome had a shortcut key for enabling reader mode to do the same.
Progress bars are annoying because that’s what scroll bars are for, and because horizontal progress bars (a) have the wrong orientation and (b) look like a loading/download indicator.
> Safari recently gained the ability to "hide distracting items"
I just looked into this feature and it looks awesome! Is there a way to do this in chrome? If not, are there any available chrome extensions that do this?
Is there a way to do anything in chrome now? It became your personal google port and will soon disable any content-modification for the sake of your adsecurity and prinvadcy.
Maybe it's a combination of my Framework's 2256x1504 resolution and the 2x scaling I'm using, but some web pages' "dick bars" cover a full third of the screen - it's infuriating. It makes sites like Rachel's doubly refreshing (1).
It does seem like something's off about the feed. Vienna can read the file, but it comes up empty. But it doesn't seem like the problem is standards non-compliance.
Other posters mentioned her IP block - I wouldn't be surprised if that was the cause since automated netnewswire traffic might easily be confused with abuse.
NNW works for every other site that has an RSS feed, and someone else just commented that while it’s a valid Atom feed, when they try to use it in another newsreader, they get an empty result.
That's a lot of words to say "I don't need to make money from this, and really only want to publish some text".
Everything follows from that, but not just in a bad, dark-pattern profit-optimizing way.
If you provide a paid service, you need auth, and then you damn well better use HTTPS.
If you have anything more complex or interactive than text-publishing, you'll quickly run into absurd limitations without cookies and JavaScript. And animations and things like sticky headers can genuinely improve the usability.
For the style of reading I normally do, this particular width is actively harmful to my reading comprehension. I would prefer just a bit wider text generally. This is something which the site does inflict on the reader. I agree that many sites are too wide in general, but this one feels too narrow by about 33% for my liking.
Additionally, the way the background degrades to a border around the text when using Dark Reader also causes problems in a similar way (due to the interaction between jagged text and a strong vertical line).
These are subtle points though, and I appreciate the many annoyances that are not there when reading Rachel's stuff.
> I don't do some half-assed horizontal "progress bar" as you scroll down the
> page. Your browser probably /already/ has one of those if it's graphical.
> It's called the scroll bar.
... unless you use GTK, and then it hides the scroll bar because it's sooo clever and wants to bestow a "clean" interface upon you. Yes, I'm looking at you Firefox.
Gee, if only there were a search engine that penalised pages and down-ranked them for any of these annoyances, especially advertising, so one could get results that didn't annoy you when you visited them. Oh wait, that would be a Google killer... don't want to go there...
> Now about SSL/TLS, we programmers are often forced to do this because there are many users who, when faced with the absence of the padlock on the page, don't even bother to continue for fear of having their data stolen.
I got to experience this last week: some family member uses the Gmail app to check Hotmail email. Suddenly the app started asking to re-enter login information: the message looked like a phishing mail. When you clicked on it, it popped up what looked like the Outlook OpenID login page, but without any address bar shown. Is it the app? Some webpage? Looks like phishing.
Perfect job from the UI team: either you don't update your credentials because it really looks like a phishing attempt, or you get trained to use those credentials in random apps / websites.
I don't keep a "dick bar" that sticks to the top of the page to remind you which site you're on. Your browser is already doing that for you.
A variation of this is my worst offender, the flapping bar. Not only it takes space, it flaps every time I adjust my overscroll by pulling back, and it covers the text I was trying to adjust. The hysteresis to hide it back is usually too big and that makes you potentially overscroll again.
Special place in hell for those who hide the flap on scroll-up but show it again when the scroll inertia ends, without even pulling back.
Can’t say here what I think about people who do the above, but you can imagine.
Another common problem with overlayed top bars is that when following fragment links within a page, the browser scrolls the page such that the target anchor is at the top of the window, which then means it’s hidden by the top bar. For example, when jumping to a subsection, the subsection title (and the first lines of the following paragraph text) will often be obscured by the top bar.
You can somewhat solve this by using some CSS to specify the offset from the top that the anchor should keep: `scroll-margin-top`.
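A minimal sketch (the 4rem is a stand-in for whatever your bar's height is):

    /* keep fragment-link targets clear of an overlayed top bar */
    :target { scroll-margin-top: 4rem; }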
Yes, but many sites don't.
Funnily enough for years I would say the general consensus on HN was that it was a thoughtful alternative to having to scroll back to the top, esp back when it was a relatively new gimmick on mobile.
I remember arguing about it on HN back when I was in uni.
It can actually be done correctly, like e.g. safari does it in the top-urlbar mode.
- When a user scrolls content-up in any way, the header collapses immediately (or you may just hide it).
- When a user scrolls content-down by pulling, without "a kick", then it stays collapsed.
- When a user "kick"-scrolls content-down, i.e. scrolls carelessly, in a way that a when finger lifts, scroll still has inertia -- then it gets shown again. Maybe with a short activation distance or inertia level to prevent ghost kicks.
As a result, adjusting text by pulling (including repeatedly) won't flap anything, and if a user kick-scrolls, then they can access the header, if it has any function to it. It sort of separates content-down scroll into two different gestures, which you just learn and use appropriately.
But instead most sites implement the most clinical behavior as described in the comment above. If a site does that, it should be immediately revoked a dns record and its owner put on probation, at the legislative level.
NAK, if I want to see the header, I know where to find it.
Is it actually possible to implement this as a website? Can websites tell if you're scrolling by pulling vs flicking?
Most mobile browsers lack a "home" key equivalent (or bury it in a not-always-visible on-screen soft-keyboard). That's among the very few arguments in favour of a "Top" navigation affordance.
I still hate such things, especially when using a desktop browser.
On iOS, tapping on the top ”status” area will bring you to the top under any browser. It’s an iOS-wide functionality on any vertically scrolling view. I sometimes miss that on Android, but on the other hand the scroll acceleration is so much faster on Android that you can always scroll to the top quickly.
I think some, if not most, mobile browsers - even apps - used to implement it via a space near the top of the window/screen. That seems to have gone away, though.
Worse: "pull to refresh" means that often when you try to scroll to the top of a page ... it reloads instead.
The number of times this has happened whilst I've been editing a post on some rando site, losing my content ...
In Firefox you can disable this behavior under Settings -> Customize -> Gestures. If your browser does not have an equivalent setting, get a better browser.
My main driver is EinkBro (on an e-ink tablet), which similarly doesn't seem to be brain-damaged in this regard.
<https://github.com/plateaukao/einkbro>
I do have Firefox (Fennic Fox F-Droid) installed on that tablet. The reading experience is so vastly inferior despite numerous capabilities of Firefox (most especially browser extensions) that it's not even funny. Mostly because scrolling on e-ink is a disaster.[1]
Chrome/Chromium of course is an absolute disaster.
EinkBro has incorporated ad-blocking, JS toggle, and cookie rejection, which meet most of my basic extension needs. The fact that it offers a paginated navigation (touch regions to scroll by a full screen) works far better with e-ink display characteristics.
I'll note that on desktop I also usually scroll by screen, though that's usually by tapping the spacebar.
--------------------------------
Notes:
1. The thought does occur that Firefox/Android might benefit by an extension (or set of same) which address e-ink display characteristics. Off the top of my head those would be:
- Paginated navigation. The ability to readily scroll by a full page, rather than touch-and-drag scrolling.
- High-contrast / greyscale optimisation. Tweaking page colours such that reading on e-ink is optimised. Generally that would be pure black/white for foreground/background, and a limited greyscale palette for other elements. Halftone dithering of photographic images would also be generally preferable.
- An ability to absolutely freeze any animations and/or video unless specifically selected.
- Perhaps: an ability to automatically render pages in reader mode, with the above settings enabled.
- Other odds'n'sods, such as rejecting any autoplay (video, audio), though existing Firefox extensions probably address that.
I suspect that much of that is reasonably doable.
There is an "E-ink Viewable" extension which seems to detect and correct for dark-mode themes (exceedingly unreadable on tablets, somewhat ironically), though it omits other capabilities: <https://addons.mozilla.org/en-US/firefox/addon/e-ink-viewabl...>.
"Edge Touch Pager" addresses navigation: <https://addons.mozilla.org/en-US/firefox/addon/edge-touch-pa...>.
And there's a Reddit submission for improving e-ink experiences w/ Firefox generally, which touches on most of the items I'd mentioned above: <https://old.reddit.com/r/eink/comments/lkc0ea/tip_to_make_we...>.
Future me may find this useful....
Which tablet do you use? I've been considering a Boox but the licensing issues and apparent outdated Android give me pause...
Max Lumi, which is now a couple of cycles old. It's the 13.3" tablet.
Looks as if its current rev is the Note Max, Android 13, and a resolution of 300 dpi (the Max Lumi is 220 dpi, which is already damned good). That's pretty much laser-printer resolution (most are effectively ~300 -- 600 dpi). I wish they'd up the onboard storage (Note Max remains at 128 GB, same as the previous device, mine is 64 GB which is uncomfortably tight).
The Android rev is still a couple of versions old (current is 16, released December 2024), though I find that relatively unimportant. I've mostly de-googled my device, install few apps, and most of those through F-Droid, Aurora Store where that doesn't suffice.
If the Max is too spendy / large for you, the smaller devices are more reasonably priced. I went big display as I read quite a few scanned articles and the size/resolution matter. A 10" or 8" display is good for general reading / fiction, especially for e-book native formats (e.g., ePub). If you read scans, larger is IMO better.
I'm aware and not happy with the GPL situation, but alternatives really don't move me.
Onyx's own bookreader software is actually pretty good and sufficient for my purposes, though you can install any third-party reader through your Android app repo you prefer.
My main uses are e-book reading (duh!), podcasts (it's quite good at this, AntennaPod is my preferred app), Termux (Linux userland on Android). For Web browsing, EinkBro and Fennic Fox (as mentioned up-thread). The note-taking (handwritten) native app is also quite good and I use that far more than I'd anticipated.
If you're looking for games, heavy web apps, video, etc., you won't be happy. If you're looking for a device that gets you away from that, I strongly recommend their line.
I've commented on the Max Lumi and experience (positives and negatives) quite a few times here on HN:
<https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...>
Thanks! Frankly so long as I can get a browser and install some reader apps (Kobo, Manga-one, etc.) that would fit my needs fine, and as long as they support older versions of Android for enough years (or I can avoid upgrading the app version) then things should be fine. The 10.3" Boox is 80k JPY which is a bit pricey, though, but I'll consider it vs the Kobo device next time I upgrade e-readers.
FWIW, I also hear good things about Kobo, though I don't have direct experience.
Those are based on Alpine Linux rather than Android, AFAIU, and if you're into Linux are apparently more readily customised and hacked.
(The fact that BOOX is Android is a misfeature for me, though it does make many more apps available. As noted, I use few of those and could replace much of their functionality with shell or Linux-native GUI tools. I suspect battery management would suffer however.)
It has worked since forever on iOS in most (native) apps, including the browser. Tap on the "clock" to scroll up - that is the home button. In Safari you might need to tap again, if the header was collapsed.
Double tap the top bar and almost all scrolling panels on iOS will whoosh to the top
This is definitely an Android failing, in that case.
The ACM's site has a bar like that, though it's thin enough that the issue is with the animations rather than the size: it expands then immediately collapses after even a pixel's worth of scrolling, so it's basically impossible to get at with the "hide distracting elements" picker.
Now you mention it, I hate hate hate scroll inertia.
I've yet to encounter a "dick bar" that doesn't jerk the page around when it collapses. Not smooth at all. I'm surprised that it hasn't been solved in 10 years.
uBlock Origin element zapper provides an effective solution in my experience.
I use this handy bookmarklet to kill sticky headers:
https://github.com/t-mart/kill-sticky
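The gist of such bookmarklets, as a hedged sketch (not the linked repo's exact code):

    javascript:(() => {
      // remove anything positioned fixed or sticky
      for (const el of document.querySelectorAll('*')) {
        const pos = getComputedStyle(el).position;
        if (pos === 'fixed' || pos === 'sticky') el.remove();
      }
    })();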
The bar screws up printing usually.
Any examples? Searching for "flapping bar" didn't yield anything.
I can’t remember sadly. But most articles I’ve read in the last few years were from /news or /newest.
The articles on medium.com are common culprits.
Hm, is that really the same thing? It's not doing the "it flaps every time I adjust my overscroll by pulling back, and it covers the text I was trying to adjust".
It may not be such an egregious example as what GP comment was referring to, but it was the first thing that came to my mind. Maybe the Medium UI has improved somewhat since I was last annoyed by this.
They did not mention --
Text littered with hyperlinks on every sentence. Hyperlinks that do on-hover gimmicks like load previews or charts. Emojis or other distracting graphics (like stock ticker symbols and price indicators GOOG +7%) littered among the text.
Backgrounds and images that change with scrolling.
Popups asking to allow the website to send you notifications.
Page footers that are two pages high with 200 links.
Fine print and copyright legalese.
Cookie policy banners that have multiple confusing options and list of 1000 affiliate third parties.
Traditional banner and text ads.
Many other dark patterns.
> Hyperlinks that do on-hover gimmicks like load previews or charts.
I haven't seen one that shows charts, but I gotta admit, I miss the hover preview when not reading wikipedia.
Something else that should absolutely be a browser-native feature rather than one each site has to optionally invent poorly and/or inconsistently.
Blink-based browsers have a built-in link preview in a popup which you can turn on.
Agreed, as long as it can be turned off by user on the browser, and doing so does not break the site / ux.
It's native in Safari
It can be done tastefully. I think this commenter is talking about the brief period where it was fashionable to install plugins or code on your site that mindlessly slaps "helpful" tooltips on random strings. I always assumed it was some AdSense program or SEO that gave you some revenue or good-boy Google points for the number of external links on a page.
In the modern day we've come full circle. Jira uses AI to scan your tickets for non-English strings of letters and hallucinates a definition for the acronym it thinks it means, complete with a bogus "reference" to one of your documents that doesn't mention the subject. They also have RAINBOW underlines so it's impossible to ignore.
I really appreciate hyperlinks that serve as citations, like “here’s some prior art to back up what I’m saying,” or that explain some joke, reference, jargon, etc. that the reader might not be familiar with, but unfortunately a lot of sites don’t use them that way.
IIRC, suck.com did this really well.
Case in point: in the Tom's Hardware article about AMD's Strix Halo (1), there's this sentence:
> AMD says this delivers groundbreaking capabilities for thin-and-light laptops and mini workstations, particularly in AI workloads. The company also shared plenty of gaming and content creation _benchmarks_. (emphasis mine)
I clicked on "benchmarks", expecting to see some, well, benchmarks for the new CPU, hoping to see some games like Cyberpunk that I might want to play. But no, it links to /tag/benchmark.
1: https://www.tomshardware.com/pc-components/cpus/amds-beastly...
Another: "Related" interstitial elements scattered within an article.
Fucking NPR now has ~2--6 "Related" links between paragraphs of a story. I frequently read the site via w3m, and yes, will load the rendered buffer in vim (<esc>-e) to delete those when reading an article.
I don't know if it's oversensitisation or progressive cognitive decline, but even quite modest distracting cruft is increasingly intolerable.
If you truly have related stories, pile them at the end of the article, and put in some goddamned microcontent (title, description, publication date) for the article.
As I've mentioned previously, my "cnn-sanify" script strips story links and headlines from CNN's own "lite" page and restructures them into a section-organised, time-sorted presentation. Mostly for reading from the shell, though I can dump the rendered file locally and read it in a GUI browser as well.
See: <https://news.ycombinator.com/item?id=42535359>
My biggest disappointment: CNN's article selection is pretty poor. I'd recently checked against 719 stories collected since ~18 December 2024, and of the 111 "US" stories, 54% are relatively mundane crime. Substantive stories are the exception.
(The sense that few of the headlines really were significant was a large part of why I'd written the organisation script in the first place.)
> Fucking NPR now has ~2--6 "Related" links between paragraphs of a story.
Some sites even have media, like videos or photo carousels in or before an article, the content of which isn't related to the article at all. So you get this weird page where you're reading an article, but other content is mixed in around each paragraph, so you have no idea what belongs where.
Then add to that all ads and references to other sections of "top stories" and the page becomes effectively unreadable without reader mode. You then left with so little content that you start questioning if you're missing important content or media.... You're normally not.
I don't believe that these pages are meant for human consumption.
Indeed, it is as though they don't want you to read the content they spend their whole professional lives writing, just click on it...
And the interstitials tend to break reader mode.
Use their text mode site:
https://text.npr.org/
That's actually what I'm referring to.
Go ahead and load that up, then start reading articles.
From the current headline set, there's "FBI says suspect in New Orleans attack twice visited the city to conduct surveillance"
<https://text.npr.org/nx-s1-5249046>
That has three occurrences of:
Which is specifically what I was criticising.

> put in some goddamned microcontent (title, description, publication date) for the article
Do you mean metadata?
No. Microcontent, a copywriting concept, not an information architecture one.
"Well-written, short text fragments presented out of supporting context can provide valuable information and nudge web users toward a desired action."
<https://www.nngroup.com/articles/microcontent-how-to-write-h...>
See also HTML microformats: https://developer.mozilla.org/en-US/docs/Web/HTML/microforma...
A fascinating and useful concept, but not what I'm referring to.
Microformats are more a semantic-web type thing. I'm talking of information presented to a non-technical reader through the browser.
> Text littered with hyperlinks on every sentence.
This is the biggest hassle associated with reading articles online. I'm never going to click on those links because:
- the linked anchor text says nothing about the website it's linking to
- the link shows a 404 (common with articles 2+ years old)
- the link is probably paywalled
Very annoying that article-writing guidelines are unchanged from the 2000s, when linkrot and paywalls were almost unheard of.
Something I wish more site owners would consider is that if you expose endpoints to the internet, expect users to interact with them however they choose. Instead of adding client-side challenges that disrupt the user experience, focus on building a secure backend. And please, stop shipping business logic to the frontend - especially if you're going to obfuscate it so badly that it ends up breaking on non-Chrome browsers because that's the only browser you test with.
Of course, there are exceptions. If you genuinely need to use a WAF or add client-side challenges, please test your settings properly. There are websites out there that completely break on Linux simply because they are using Akamai with settings that just don't match the real world and were only tested on Mac or Windows. A little more care in testing could go a long way toward making your site accessible to everyone.
This.
My favorite experience was trying to file taxes on Linux in Germany.
Turns out the ELSTER backend had code along the lines of: if Chrome and Linux, then store to a test account. It wasn't possible to file taxes on Linux for over 6 months after it went online as a mandatory state-funded web service, until they fixed it. I can't even comprehend who writes code like that.
It also took me a very long while to explain to the BKA that I did not try to hack them, and that the people working at DATEV are just very incompetent.
It sounds like the easiest solution would be to install another browser (e.g. Firefox) until they fixed the issue. If it is only the combination of Chrome and Linux that is the problem, that is.
> I can't even comprehend who writes code like that.
The government. Case in point...
I agree with most of this. If every website followed these, the web would be heaven (again)...
But why this one?
>I don't force you to use SSL/TLS to connect here. Use it if you want, but if you can't, hey, that's fine, too.
What is wrong with redirecting 80 to 443 in today's world?
Security wise, I know that something innocuous like a personal blog is not very sensitive, so encrypting that traffic is not that important. But as a matter of security policy, why not just encrypt everything? Once upon a time you might have cared about the extra CPU load from TLS, but nowadays it seems trivial. Encrypting everything arguably helps protect the secure stuff too, as it widens the attacker's search space.
These days, browsers are moving towards treating HTTP as a bug and throwing up annoying propaganda warnings about it. Just redirecting seems like the less annoying option.
Some old-enough browsers don't support SSL. At all.
Also, something I often see non-technical people fall victim to is that if your clock is off, the entirety of the secure web is inaccessible to you. Why should a blog (as opposed to say online banking) break for this reason?
Even older browsers that support SSL often lack up-to-date root certificates, which prevents them from establishing trust with modern SSL/TLS certificates.
Fairly recently I attempted to get an (FPGA-emulated) Amiga, a G4 Power Macintosh running System 9.2, and a Win2000sp4 Virtual Machine online (just for very select downloads of trusted applications, not for actual browsing). It came as a huge surprise to find that the Win2K VM was the biggest problem of the three.
How old are these browsers and why should I let them online? Must be decades old.
Android versions prior to 4.4 support only TLS 1.0 which is deprecated and many old devices aren't upgradable. The same for Mobile IE 10.
IE 10 in Windows Server 2008 doesn't support TLS 1.1+ by default.
Yup, my last phone upgrade was prompted by this.
But the old phone is significantly better at making actual phone calls than the new one.
Why is it your job to police the browsers people use?
> Must be decades old.
So? If they still power on and are capable of talking HTTP over a network, and you don't require the transfer of data that needs to be secured, why shouldn't you "let" them online?
Why you shouldn’t use old, unpatched software on an open network that doesn’t support modern protocols?
Beats me.
I don't know about you, but I'd rather my ancient laptop not end up as part of a botnet simply because I visited the wrong website with it.
Usually browsers on hobbyist legacy operating systems, to which modern browsers haven’t or can’t be ported, not to mention keeping root certificates up to date. Or even if they do support SSL, then only older algorithms and older versions of the protocol. It’s nice to still be able to browse at least part of the web with those.
The problem usually isn't SSL support as such; it's that older SSL and TLS versions are being disabled.
I actually have an example myself - an iPad 3. Apple didn't allow anyone other than themselves to provide a web browser engine, and at some point they deliberately stopped updates. This site used to work, until some months ago. I currently use the iPad for e-books; if that weren't the case, I think by now it would essentially be software-bricked.
I acknowledge that owning older Apple hardware is dumb. I didn't pay for it, though.
When you force TLS/HTTPS, you are committing both yourself (server) and the reader (client) to a perpetual treadmill of upgrades (a.k.a. churn). This isn't a value judgement, it is a fact; it is a positive statement, not a normative statement. Roughly speaking, the server and client softwares need to be within say, 5 years of each other, maybe 10 years at maximum - or else they are not compatible.
For both sides, you need to continually agree on root certificates (think of how the ISRG had to gradually introduce itself to the world - first through cross-signing, then as a root), protocol versions (e.g. TLSv1.3), and cipher suites.
For the server operator specifically, you need to find a certificate authority that works for you and then continually issue new certificates before the old one expires. You might need to deal with ordering a revocation in rare cases.
I can think of a few reasons for supporting unsecured HTTP: People using old browsers on old computers/phones (say Android 4 from 10 years ago), extremely outdated computers that might be controlling industrial equipment with long upgrade cycles, simple HTTP implementations for hobbyists and people looking to reimplement systems from scratch.
I haven't formed a strong opinion on whether HTTPS-only is the way to go or dual HTTP/HTTPS is an acceptable practice, so I don't really make recommendations on what other people should do.
For my own work, I use HTTPS only because exposing my services to needless vulnerabilities is dumb. But I understand if other people have other considerations and weightings.
>the server and client softwares need to be within say, 5 years of each other, maybe 10 years at maximum
That's a fair point. HTTP changes more slowly. Makes sense for sites where you're aiming for longevity.
Except it's not actually true. https://www.ssllabs.com/ssltest/clients.html highlights that many clients support standard SSL features without having to update to fix bugs. How much SSL you choose to allow and what configurations is between you and your... I dunno, PCI-DSS auditor or something.
I'm not saying SSL isn't complicated, it absolutely is. And building on top of it for newer HTTP standards has its pros and cons. Arguably though, a "simple" checkbox is all you would need to support multiple types of SSL with a CDN. Picking how much security you need is then left as an exercise for the reader.
... that said, is weak SSL better than "no SSL"? The lock icon appearing on older clients that aren't up to date is misleading, but then many older clients didn't mark non-SSL pages as insecure either, so there are tradeoffs either way. But enabling SSL by default doesn't have to exclude clients necessarily. As long as they can set the time correctly on the client, of course.
I've intentionally not mentioned expiring root CAs, as that's definitely an inherent problem to the design of SSL and requires system or browser patching to fix. Likewise https://github.com/cabforum/servercert/pull/553 highlights that some browsers are very much encouraging frequent expiry and renewal of SSL certificates, but that's a system administration problem, not technically a client or server version problem.
As an end user who tries to stay up to date, I've just downloaded recent copies of Firefox on older devices to get an updated list of SSL certificates.
My problem with older devices tends to be poor compatibility with IPv6 (an addon in XP SP2/SP3, not enabled by default), and that web developers tend to use very modern CSS and web graphics that aren't supported on legacy clients. On top of that, you've got HTML5 form elements, the question of what displays when responsive layouts aren't available (how big is the font?), etc.
Don't get me wrong, I love the idea of backwards compatibility, but it's a lot more work for website authors to test pages in older or obscure browsers and fix the issues they see. Likewise, with SSL you can test on a legacy system to see how it works, or run the Qualys SSL checker, for example. Browsers maintain forwards compatibility, but only to a point (see ActiveX, Flash in some contexts, Java in many places, the <blink> tag, framesets, etc.)
So ultimately compatibility is a choice authors make based on how much time they put into testing for it. It is not a given, even if you use a subset of features. Try using Unicode on an early browser, for example. I still remember the rails snowman trick to get IE to behave correctly.
> Except it's not actually true. https://www.ssllabs.com/ssltest/clients.html
Oh, if only TLS was that simple!
People fork TLS libraries, make transparent changes (well, they should be), and suddenly they don't have compatibility anymore. Any table with the actually relevant data would be huge.
Then just use a proxy for your older browsers, I think the onus is not on the site owner but instead the device.
Imagine if it were required that every single device or machine in your life be designed within 5 years of every other one, or they wouldn't work together.
We would be constantly trying to finish a home we could actually use, and forget about fruit or timber agriculture.
There's something deeply broken about computers. And that's from someone deeply on the camp that "yes, everybody must use TLS on the web".
> There's something deeply broken about computers.
It's just not that mysterious, if we want our communications to be secure (we do) then we can't reasonably use ciphers that have been broken, since any adversary can insert themselves in the middle and negotiate both sides down to their most insecure denominator, if they allow it.
Why should web browsers treat http like a bug? Many sites don’t need https.
I used to have an ISP that would inject ads into HTTP sites. Every site needs HTTPS.
Or, your ISP does not deserve to exist.
True but you can’t build distributed systems that rely on every single actor being a good one. Hence encryption, the police, etc.
The police is a good example: instead of reinventing the basics ourselves, we have a body of people who enforce the law.
It’s not like ISPs are unknown entities.
What about governments? In my country they perform MITM attacks against unencrypted HTTP, while the best they can do with HTTPS is to block the site. I'd much prefer everyone enforcing HTTPS at all times.
> Many sites don’t need https.
Maybe intranet sites. Everything else absolutely should.
https://doesmysiteneedhttps.com/
Those are some of the most pedantic, grasping-at-straws reasons I've ever read. It's like they know there's nothing wrong with HTTP, so they've had to invent worst-case nightmare scenarios to make their "it's so important" reasons stick. HTTPS is great. I use it. That website is pathetic, though.
ISPs injecting ads into HTTP websites isn't a weird edge case. I've seen it happen.
The source footer ("View Page Source") summarizes it perfectly:
Sites that need HTTPS:
- all of them
If you like it, you better put a lock on it.
And, BTW, the website is as delightfully simple and unobtrusive as the one in the article.
Every connection should be encrypted.
Unencrypted connections can be weaponized by things like China’s Great Canon.
This is the statement of someone who wasn't around in 2013, when the Snowden leaks happened and Google's datacenters got owned. Everyone switched to HTTPS shortly thereafter.
Didn't everyone switch to TOR shortly after? :-(
Some people use TOR, but the internet generally started using https for everything.
That's the only one I had an issue with as well.
I understand the thinking, backwards compatibility of course, and why encrypt something that is already freely available? But this means I can set up a public Wi-Fi that hijacks the website and displays whatever I want instead.
TLS is about securing your identity online.
I think with AI forgeries we will move more into each person online having a secure identity. Starting with well know personas and content creators.
> I understand the thinking, backwards compatibility of course, and why encrypt something that is already freely available?
Let me explain it to you like this:
The NSA has recorded your receipt of this message.
Trust me, the NSA tracking what you read is MUCH WORSE than Google tracking what you read. Encryption helps defeat that.
I think you missed the "Use it if you want" part.
Both Chrome and Firefox will get you to the HTTPS website even though the link starts with "http://", and it works, what more do you want?
You have to type "http://" explicitly, or use something that is not a typical browser, to get the unencrypted HTTP version. And if that's what you are doing, that's probably what you want. There are plenty of reasons why, some you may not agree with, but the important part is that the website doesn't try to force you.
That's the entire point of this article: users and their browsers know what they are doing, so just give them what they ask for, no more, no less.
I also have a personal opinion that SSL/TLS played a significant part in "what's wrong with the internet today". Essentially, it is the cornerstone of the commercial web, and the commercial web, as much as we love to criticize it, brought a lot of great things. But also a few not so great ones, and for a non-commercial website like this one, I think having the option of accessing it the old (unencrypted) way is a nice thing.
My first impulse is to scream obscenities at you, because I've seen this argument repeated so many times that I tend to just keep quiet. I don't think you can't understand; I think you refuse to.
You're basically saying "oh, _YOUR_ usecase is wrong, so let's take this away from everybody because it's dangerous sometimes"
But yeah, I have many machines which would work just fine online except they can't talk to the servers anymore due to the newer algorithms being unavailable for the latest versions of their browsers (which DO support img tags, gifs and even pngs)
It's fine on a simple site. But lack of SSL/TLS also effectively disables http2 which is a performance hit, not just a security concern.
>>I don't force you to use SSL/TLS to connect here. Use it if you want, but if you can't, hey, that's fine, too.
She accepts http AND https requests. So it's your choice, you want to know who you're talking to, or you want speed :)
HTTP/2 doesn't matter in this case, there are only 4 files to transfer. The webpage itself (html), then the style sheet (css), then the feed icon and favicon. You can do with only the html, the css makes it look better, and the other two are not very important.
It means that HTTP/2 will likely degrade performance because of the TLS handshake, and you won't benefit from multiplexing because there is not much to load in parallel. The small improvement in header size won't make up for what TLS adds. And this is just about network latency and bandwidth. HTTP/2 takes a lot more CPU and RAM than plain HTTP/1.1. Same thing for HTTP/3.
Anyways, it matters even less here because this website isn't lacking SSL/TLS, it just doesn't force you to use it.
I have pings in excess of 300 ms to her site. TCP connections need a lot of time to "warm up" before speeds become acceptable. It's easy to say things like "http2 does not matter" when you're single digit milliseconds away from all major datacenters.
HTTP/2 matters on bloated websites with tons of external resources, it is not the case here. HTTP/2 will not get you the first HTML page faster and this is the only thing needed here to start showing you something.
In terms of round trips, HTTP/1.1 without TLS will do one less than HTTP/2 with TLS, and as much as HTTP/3 with TLS.
> I don't keep a "dick bar" that sticks to the top of the page to remind you which site you're on.
I use an extension called "Bar Breaker" that hides these when you scroll away from the top/bottom of the page.[0] More people should know about it.
[0] https://addons.mozilla.org/en-US/firefox/addon/bar-breaker/
It's annoying that every time "they" come up with a new antipattern, "we" have to add yet another extension to the list of mandatory things for each browser. And it also promotes browser monopoly because extensions get ported slowly to non-mainstream browsers.
It would be better to have a single extension like uBlock origin to handle the browser compatibility, and then release the countermeasures through that. In fact, ublock already has "Annoyances" lists for things like cookie banners, but I don't think it includes the dick bar unfortunately.
Incidentally, these bars are always on sites where the navbar takes 10% vertical space, cookie banner (full width of course) takes another 30% at the bottom, their text is overspaced and oversized, the left/right margins are huge so the text is like 50% of the width... Don't these people ever look at their own site? With many of these, I'm so confused how anyone could look at it and say it's good to go.
It's not a silver bullet, but I do the following with uBlock Origin:
1. JS disabled by default, only enabled on sites I choose
2. Filter to fix sites that mess with scrolling:
3. Filters for dick bars and other floating elements.
To me, what gets rid of those annoying sticky bars that cover half the screen is this rule:
##[class*="part of the name of the annoying class, generally sticky something"]
This rule is amazing for dealing with those randomly generated class names.
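For example (domain and class fragments are hypothetical; substitute whatever the element picker shows you):

    example.com##[class*="stickyHeader"]
    example.com##[class*="FloatingBanner"]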
The president was very insistent that we show popup ads at six different points in time, until he got home and got six popup ads, and said, “You know what? Maybe just two popups.”
— Joel Spolsky, What is the Work of Dogs in this Country? (2001): <https://www.joelonsoftware.com/2001/05/05/what-is-the-work-o...>
The OP blames "various idiot web 'designers'" for problems, but in my 30 years of being a web designer I have yet to meet one designer that wants to cause these problems. It's usually people responsible for generating revenue.
The web designers and developers are at the very least complicit. They are ultimately the ones typing in the code and hitting submit, so they at least must share the blame.
True, I'm just saying I don't think that's where the problems originate.
In my practice, I'll try a Jedi mind trick, e.g. "Trying to [state larger goal] makes a lot of sense. An even more effective way to do that is to [state alternate, non-toxic technique]."
In my career of roughly half as long, I have met plenty. Although it’s also true that it’s often people higher up who are amused by design gimmicks.
Extensions are already there: ubo, stylebot. We just have to invent a way to share user-rule snippets across these. There will always be a gray zone between trusted adblock lists included by default and some preferential things.
> We just have to invent a way to share user-rule snippets across these
Like the User-Scripts of Greasemonkey?
https://addons.mozilla.org/en-US/firefox/addon/greasemonkey/
Yes.
Nice. This may be my pet peeve on the modern internet. Nearly EVERY site has a dick bar, and the reason I care is that it breaks scrolling with the spacebar, which is THE most comfortable way to read long content: it scrolls you a screen at a time. But a dickbar obscures the first 1 to…10? lines of the content, so you have to scroll back up. The only thing worse than the dickbar is the dickbar that appears and disappears depending on the last direction scrolled, so that each move of the scrolling mechanism changes the viewport size. A pox on them all.
Note to web devs: use scroll-padding to fix this: https://developer.mozilla.org/en-US/docs/Web/CSS/scroll-padd...
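A hedged sketch of that fix (4rem is a placeholder for the actual bar height):

    html {
      /* reserve the top 4rem, so anchor jumps and keyboard paging
         land the target text below the bar */
      scroll-padding-top: 4rem;
    }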
NAK; stop doing stupid shit and then you don't need browser support to fix the things you break on a case-by-case basis.
No.
Just kill the fucking dickbar.
Dick bar often breaks hash links as well. You click a link that scrolls you to a section, but you can't see the first few lines of it.
> Nearly EVERY site has a dick bar, and the reason I care is
that when reading on my laptop screen, it takes up valuable vertical space on a small display that is in landscape mode. I want to use my screen's real estate to read the freaking content, not look at your stupid branding bar.
And I don't need any on-page assistance to jump back to the top of the page and/or find the navigation. I have a "Home" key on my keyboard and use it frequently.
TBF, many people don't have that Home key. I agree with you, though - there should be a better solution. At the very least, just have an optional "Top of page" toolbar button in your browser.
Ctrl+↑
That's one option. On macOS, it's fn + Left. On Android, I'm not sure there's anything.
On Android I think the typical interaction is to "fling" the page downward, which will rapidly scroll until you crash into the top. Seems adequate for all but the longest pages.
Wasn’t it cmd-up on Mac?
On iPhone: Touch the top bar
On MacOS: Click the top part of the scroll bar
I often scroll with the space bar instead of more modern contrivances like arrow keys, scroll wheels, trackpoints, or trackpads. Sites with these header bars always seem to scroll the entire viewport y size instead of (y - bar_height), so after I hit space I have to up-arrow some number of times to see the next line of text that should be visible but is hidden under the bar.
IIRC (as a fellow spacebar aficionado) position:fixed breaks spacebar scrolling but position:sticky generally doesn’t.
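Roughly, the difference (the .site-header class is made up; the two rules are alternatives):

    /* out of flow, overlaid on the viewport; spacebar paging overshoots */
    .site-header { position: fixed; top: 0; width: 100%; }

    /* stays in the layout flow, so page geometry accounts for it */
    .site-header { position: sticky; top: 0; }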
I just toggle reader mode. Gets rid of this and everything else annoying. For sites where that doesn't work, I zap the bar in uBO.
Brave has an excellent Speedreader mode.
I am usually the first old man to yell at any cloud, and I was overjoyed when someone invented the word "enshittening" for me to describe how the internet has gotten, but it surprised me a bit that people found that one annoying. I can see the problem of it sticking the top of the page with a logo (which is basically an ad and I hate those), but they usually have a menu there, so I always thought of them a bit like the toolbar at the top of an application window in a native desktop application. FWIW when I've built those, I've always de-emphasized the branding and focused on making the menus obvious and accessible.
I'm happy to learn something new about other people's preferences, though. If people prefer scrolling to the top, so be it!
EDIT: It occurs to me that this could be a preference setting. A few of the websites that have let me have my way, I've started generating CSS from a Django template and adding configuration options to let users set variables like colors--with really positive feedback from disabled users. At a fundamental level, I think the solution to accessibility is often configurability, because people with different disabilities often need different, mutually incompatible accommodations.
Another thing to check for with sticky headers is how they behave when the page is zoomed. Often, the header increases in size proportionately, which can shrink down the effective reading area quite a bit. Add in the frequent sticky chat button at the bottom, and users may be left with not a lot of screen to read text in.
There can be a logic to keeping the header at the top like a menu bar, and I applaud you if you take an approach that focuses on value to the user. Though I'd still say most sites that use this approach don't have a strong need for it, nor do they consider smaller viewports other than portrait mobile.
Configuration is great, though it quickly runs into discoverability issues. However it is the only way to solve some things - like you pointed out with colors. I know people who rely on high contrast colors and others that reduce contrast as much as they effectively can.
This is exactly what CSS was designed for: allowing you to define your personal style preferences in your browser, applying them across all websites. The term ‘cascading’ reflects this purpose.
Unfortunately, the web today has strayed far from its original vision. Yet, we continue to rely on the foundational technologies that were created for that very vision.
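User style sheets still work, for what it's worth. In Firefox, a userContent.css along these lines applies your preferences everywhere (after enabling toolkit.legacyUserProfileCustomizations.stylesheets in about:config):

    /* my preferences, winning the cascade over the site's styles */
    body {
      font-size: medium !important;
      line-height: 1.5 !important;
    }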
IMO browsers are broadly dropping the ball and failing to be "the user's agent." Instead they are the agents of web developers, giving them the powers that users should have.
If browsers catered to their user's desires more than they cater to developers, the web wouldn't be so shitty.
This is going to be an unpopular opinion, but I think the beginning of the end was the invention of JavaScript. Pulling down an unknown chunk of code from the internet and running it is malware. Even if browsers successfully sandbox the JS (a promise which they've failed to keep numerous times) it can do all sorts of stuff that doesn't serve me, like mine crypto (theft of resources) or display ads (adware).
The primary benefit of web applications is they don't lose your data. Not a single web application UI that exists provides as good a user experience as the native desktop applications that came before. A web where browsers provided their own UIs for various document types, and those document types could not modify their UIs in any way, period, would be a better web. You serve up the document, I get to control how it looks and behaves.
They do and it's called "reader view".
Browsing without reader view enabled by default is like driving your car around with the hand brake engaged.
I agree with a lot of the complaints in this article except maybe two, and this is one of them. I think a sticky header is incredibly useful, and they're not something new. Books have sticky headers! Every page of a book will generally list the title and author at the top. I find it just a useful way to provide context and to help me remember who/what I'm reading. The colours/branding of the sticky header communicate that much better to me than the tiny text-only title/url of my browser. And the favicon likewise doesn't contain enough detail for me to latch onto it.
But for UX: (1) Keep it small and simple! It shouldn't be more than ~2 lines of text. (2) Make it CSS-only; if you have to use custom JS to achieve a certain effect, be ready to spend a LOT of time to get the details right, or it'll feel janky. (3) Use `scroll-padding` in the CSS to make sure links to sections/etc work correctly.
I prefer it because I read by scrolling down one line at a time. This means that when I want to go back and read the previous couple of lines, I have to scroll up. This shows a big stupid menu of unknown size and behaviour on top of the text I'm trying to re-read.
The biggest problem for me is the randomness between different sites. It's not a problem for Firefox to display a header when I scroll up, since I can predict its behaviour. My muscle memory adapts by scrolling up and then down again without conscious thought. It's a much bigger problem if every site shows its header slightly differently.
I think the key thing is that when I scroll up, 95% of the time I want to see the text up the page, and at most maaaaaaaybe 5% of the time I want to open the menu. This is especially true if I got to your website via a search engine. I don't give a damn what's hidden in your menu bar unless it's the checkout button for my shopping cart, and even then I'd prefer you use a footer for that.
Another one: No "Sign in with your Google account" popup covering the upper right corner of the page
You can disable this as a browser setting in some browsers (like Chrome). It was driving me nuts until I figured out I could just flip a global flag for it.
How??
There's probably a better way, but I use the graphical element zapper from uBlock Origin to hide distracting elements.
Works wonders for sites that I visit regularly. StackOverflow: Do I need related posts? The left sidebar (whatever it is they have there, I have forgotten already)? Their footer?
It's amazing to me what people tolerate, just because it doesn't seem like a human is doing it to us. If a door-to-door salesman was told to do the equivalent of this stuff, they'd be worried about being punched in the face.
The logic here is that it's you who come to visit, not them. But the next issue is that everyone agrees it's not normal for a private service either, even if it's free, and it should be disallowed. But laws don't work like that. We simply have no law systems that could manage that, nowhere on this planet.
Imagine if people understood consent the way the tech industry does.
If the world was a nightclub, the tech industry would be a creepy guy who goes up to everyone and says "You're now dating me. [Accept or Try Again Later]"
this is exactly the sort of idealistic post that appeals to HN and nobody else. i don't have a problem with that, apart from when technologists use this "back to basics" stuff to shame the substacks and the company blogs out there that have to be powered more by economics than by personal passion.
it's -obvious- things are mostly "better"/can be less "annoying" when money/resources are not a concern. i too would like to spend all my time in a world with no scarcity.
the engineering challenge is finding alignments where "better for reader" overlaps with "better for writer" - as google did with doubleclick back in the day.
Most people don't remember, and some have never experienced, the Internet before it became a money grab.
I think a lot of people outside of HN would prefer that Internet way more than what we have now.
The Web has been a money grab since Netscape went public in 1995.
My first for pay project was enhancing a Gopher server in 1993.
Some people making money on the Internet is a lot different than what the Internet has become today - and what I meant by money grab.
You do remember the entire dot com boom and bust, the punch the monkey banner ads, X10 pop-under ads, etc.?
Don’t romanticize the early internet.
I lived through the early internet, and by and large you didn’t come across much page-view monetization or engagement-maximization tactics.
What do you call banner ads?
Banner ads back then were more obnoxious. Today they are deceptive.
The point being made here is that it wasn’t perfect before but for many it was better.
You do remember the punch the monkey ads that were just an animated gif that you could click anywhere and “win”. They were an early form of engagement bait.
There were unscrupulous people posting on Usenet for monetary gain before the web
Oh man. Pop under ads. Those sucked. I’d still take the old Internet with pop under ads over what we have today.
This actually appeals to everyone. There are words and people can read them. It literally just works. With zero friction. This is peak engineering. It's how the web is supposed to work. It is objectively better. For everyone. Everyone except advertisers.
The only problem to be solved here is the fact advertisers are the ones paying the people who make web pages. They're the ones distorting the web into engagement maximizing content consumption platforms like television.
The words are nice and all, but it's no https://ciechanow.ski/
The nice thing about your example is that it works even in eww (emacs), and quite well (not the JS part, of course).
To me, it seems like basically everything on this page is both better for reader and better for writer. Which ones are not, in your opinion?
All the tracking stuff is better for advertisers than going without, and most writers are paid by advertisers. So transitively it would be reasonable to say that tracking is good for writers and bad for readers.
People oversell this tracking/advertising. It's not a goldmine for every site. For this blog, if she wanted to factor analytics into her decisions about what content to produce, does she really need super-high-resolution stuff like where people moved their mouse? Would she ever make a significant income from these "ads", or from selling the data for possibly pennies?
Besides, just google analytics or something like that wouldn't be that bad (I know the blog author would disagree). A lot of sites go nuts and have like 20 different trackers that probably track the same things. People just tack stuff on, YAGNI be damned, that's a big part of the problem and it's a net drain on both parties.
> just google analytics or something like that wouldn't be that bad
Google Analytics is the worst. Not on an individual website, but by the fact that it is almost everywhere. So Google has been getting everyone's web history for more than a decade.
Add Android, gmail, the social "share" or "login with" integrations and any Stasi member would have called you delirious for thinking this kind of surveillance apparatus was possible. Even more that people would willingly accept it.
I mostly agree with this. Commercial websites probably should track engagement and try to increase it. They should probably use secure http. They probably should not care about supporting browsers without JS. If they need sign in then signing in with Google is useful. There's no harm in having buttons to share on social media if that will help you commercially.
Where I think the post hits on something real is the horrible UI patterns. Those floating bars, weird scroll windows, moving elements that follow you around the site. I don't believe these have been AB tested and shown to increase engagement. Those things are going to lose you customers. I genuinely don't understand why people do this.
> appeals to HN and nobody else
Your argument is that writers do this because of "economics", but to the detriment of readers. I don't see how this extends only to HN readers. It applies to all readers in general.
The author isn't trying to profit from the reader's attention; it's just a personal blog. An ad-based business would. Neither is right or wrong, but the latter is distinctly annoying.
Ad-based businesses are indeed wrong and immoral.
Ad-based businesses exist because a lot of people (including many on this forum) refuse to pay for anything. During the late 1990s/early 2000s, people hated paying for anything and demanded that everything on the Internet should be free. Well, that led to the vast surveillance machine which powers Google, Facebook, and every ad-tech business out there. They need surveillance because it lets them serve more relevant ads and more relevant ads make more money.
The bottom line is if you hate ad-based businesses, start paying for things.
Netflix does $30B in revenue. Spotify over $10B. Steam estimated around $10B. Those are services where anyone could figure out how to get the stuff for free with a few minutes of research. People pay when they perceive value.
A better way to characterize what's happening is that there is a lot of material out there that no one would ever pay for, so those companies instead try to get people's attention and then sell it.
Their bait never was and never will be worth anything. People aren't "paying with ads"; they're being baited into receiving malware, and a different group of people pay for that malware delivery.
A personal take is that ad-based businesses exist because there’s no secure widespread reliable approach for micropayments (yet?).
The mean value of adverts on a page is in the order of a tiny fraction of a cent per reader, which is presumably enough for the businesses that continue to exist online. If it was possible to pay this amount directly instead, and have an ad-free experience, I suspect many would do so, as the cumulative amount would usually be negligible. Yet so far, no-one’s figured it out.
(I should mention, there are very strong reasons why it’s difficult to figure out currently, but AIUI these are rooted in the current setup of global payments and risk management by credit card companies.)
I reject that idea, simply because companies will then offer micropayment access and STILL put ads on it. Sure, youtube and spotify will disable them for you. But for every one of those we have a netflix and its dark side cohort.
Is there anything that indicates that customers want micropayments?
Music and video streaming services are syndicating content from millions of creators into single subscription services. Why is it so impossible to make mega conglomerates for textual content? Why is nobody doing this?
Right now, creators are forced to make YouTube videos, because that's their most viable path to getting paid for their work. Why does it have to be this way, when a lot of what they do would be better as text instead of as a talking head?
Flattr 2.0 (and Brave ?) got pretty close.
I pay for things, but often there is no option to pay. When there is, eventually the company figures out that they can have you pay AND show you ads. Then the argument becomes "well if they opt not to have ads, you should pay more for the privilege."
But no matter the cost of a thing, you can always "make more" by adding ads and keeping the cost as is. So eventually, every service seems to decide that, well, you DESERVE the ads, even if you pay.
Sure, competition could solve this, but often there isn't any.
>They need surveillance because it lets them serve more relevant ads and more relevant ads make more money.
Don't they have enough money?
Does this work? Which paid platform doesn’t eventually start showing ads to paid users?
Pinboard is the obvious example that springs to mind.
Yes, it’s the individuals’ fault. Google, FB, and the rest need to spy on us! I feel just awful for those poor companies.
No. If your business model requires you to do evil things, your business should not exist.
Anyway, I do pay for services that provide value. I was a paying Kagi customer until recently, for example (not thrilled with the direction things are going there now though).
What is the direction that things are going at Kagi now? What were they before?
All the AI shit, plus this
https://old.reddit.com/r/ukraine/comments/1gvcqua/psa_the_ka...
Product development disagreements are largely immaterial to me, though the discussion around their integrations with Yandex remind me of prior discussions around their integrations with Brave.
Either way, thanks for sharing.
People won't pay for things they can get free elsewhere. People happily pay for differentiated goods.
Substack's UI is fairly minimal and does not appear to have many anti-patterns. My only complaint is that it is not easy to see just the people I am subscribed to.
Substack fails on several points for me.
On the first or second page view of any particular blog, the platform likes to greet you with a modal dialog to subscribe to the newsletter, and you have to find and click the "No thanks" text to continue.
Once you're on a page with text content, the header bar disappears when you scroll downward but reappears when you scroll upward. I scroll a lot - in both directions - because I skim and jump around, not reading in a rigidly linear way. My scrolling behavior is perfectly fine on static/traditional pages. It interacts badly with Substack's "smart" header bar, whose animation constantly grabs my attention, and also it hides the text at the top of the page - which might be the very text I wanted to read if it wasn't being covered up by the "smart" header bar.
Substack absolutely refuses to stop emailing you. It's simply not possible to subscribe to paid content and NOT have them either email you or force push notifications. Enough people have complained about this that it's pretty obvious this is intentional on their part.
Doesn't substack nag you to log in? That's a non-starter for me
Substack disables zooming on mobile and I hate it.
Really? I can still zoom in and out in the normal way on a Substack article on Safari and iOS.
What did they disable exactly?
I see the zoom-breaking on android. I also see the top and bottom dick bars, a newsletter popup on every article, and links opening in new windows.
Let's be real. If you have a website where you are trying to sell something to your page visitors (not ad clicks or referral links), then each of these annoyances and hurdles increases the risk that a potential customer backs out of it.
If you give great customer service, you get great customers – and they don't mind paying a premium.
If you're coercing customers, then you get bad customers – and they are much more likely to give you trouble later.
Most business owners are your run of the mill dimwits, because we live in a global feudal economic system – and owning a business doesn't mean you are great at sales or have any special knowledge in your business domain. It usually just means you got an inheritance or that you have the social standing to be granted a loan.
Nice read.
I wish there was one more paragraph though:
"I don't use trailing slashes in article URLs. Blog post is a file, not an index of a directory, so why pretend otherwise?"
But then it's http://rachelbythebay.com/w/2025/01/04/cruft/, so I guess they don't agree.
Back in the day it used to be the way to do it. Web servers would serve example.com/foo/index.html when visiting example.com/foo/ and it was the easy way of not having example.com/foo.html as your URL.
Later on Web servers made it easier to load foo.html for example.com/foo, but that wasn't always the case.
It's been a couple of decades since I had to do it, but at least that's my memory on why I've been doing this since forever (including on newer websites, which is admittedly a mistake).
I've got nothing against it. Just parse it as "the default file for that path" since one isn't specified. Just like you expect that for the main page e.g. "https://news.ycombinator.com/" or "https://google.com/" except as a subdirectory.
I do parse it that way. My position is that it's not the proper form for the function.
Note that both trailing slash variant examples you have provided (HN and Google) do redirect to non slash ones.
In fact, personally, I don't expect trailing slashes for main pages.
>> "https://news.ycombinator.com/" or "https://google.com/"
> both trailing slash variant examples you have provided (HN and Google) do redirect to non slash ones
This is incorrect. Chrome (and Firefox by default?) have the broken behavior of showing bare URLs like "google.com" or even "https://google.com". But this is absolutely wrong according to the URL spec and HTTP spec. After all, even if you want to visit "https://google.com", the first line that your browser sends is "GET / HTTP/1.1". Notice the slash there - it is mandatory, as you cannot request a blank path.
Things were better in the old days when browsers didn't mess around with reformatting URLs for display to make them "look" human-friendly. I don't want "www." implicitly stripped away. I don't want the protocol stripped away. I don't want the domain name to be in black but the rest of the text to be in light gray. I just want honest, literal URLs.
In Firefox, this can be accomplished in about:config by setting:

    browser.urlbar.formatting.enabled = false
    browser.urlbar.trimURLs = false
I like it as a clear visual indication of where the URL ends. And a page often consists of more than just one file (images etc.) (the HTML portion of the page is just an implicit index.html in the directory), so it does make some sense.
I like your argument about implicit index.
But is an article an index to the attached media? Not even "just", but "at all"? Is this the right abstraction? Or do we have a better one, achievable by simply removing the trailing slash?
We discuss this in the context of cruft, user friendliness, and selecting proper forms of expression, which the original article seems to be all about, by the way.
My point is that “an article” generally doesn’t consist of just a single file. Or maybe rather, that the distinction between a file or document and a folder isn’t necessarily meaningful. Physical “files” (i.e. on paper, as in a filing cabinet) actually commonly used to be folders. And there is little difference between a zip file (which are used for a number of popular document formats) and the unzipped directory it contains.
Unlike in file systems, we don’t have the directory–file distinction on the web, from the user perspective. Everything shown in the browser window is a page. We might as well end them with a slash. If anything, there is a page–file distinction (page display in the browser vs. file download). I agree that URLs for single-file downloads (PDFs, images, whatever) should not have a trailing slash.
> I don't do popups anywhere. You won't see something that interrupts your reading to ask you to "subscribe" and to give up your e-mail address.
Every time I get hit with a popup by a site I usually just leave. Sometimes with a cart full of items not yet paid for. It's astounding that they haven't yet learned that this is actually costing them business. Never interrupt your customers.
Same goes for stores. If I walk into your store to browse and you accost me with "Can I help you" I'll be looking for an exit ASAP.
> Sometimes with a cart full of items not yet paid for.
And then a week later you'll get an email "Did you forget to buy all those products we're sure you want?..."
> I don't pretend that posts are evergreen by hiding their dates.
I didn't realise that hiding dates for the illusion of evergreen-ness was a desirable thing!
On my personal site I added dates to existing pages long after they were uploaded, for the very reason that I wanted it to be plenty clear that they were thoughts from a specific time.
For example, a bloggish post I wrote where, while I still think it's right, it now sounds like linkedin wank. I'm very happy for that one to be obviously several years old.
Supposedly it’s an SEO thing. The theory is that Google is biased towards novelty and so penalises older articles (although I’m not sure how removing the date would help because surely Google would still know how long the article has been online for.)
I have no idea how true that is but I remember hearing SEO folks talk about it a few years back.
Some content mills seem to display a date but automatically update it periodically. Sometimes you can outright see it can't be correct since the information is woefully out of date or you can check from Internet Archive that the content is the same as before but with a bumped date.
A web page can be looked at as a document or an application. Those publishers that use it as a document produce a wholly better experience for reading text. There's a lot of pressure to go down the application path, even Wikipedia has done it, which sucks big time.
I think many of these are just design trends. As in, I think in a lot of cases web designers will add these “features” not for a deeply considered reason, but simply because that’s the thing everyone else seems to be doing.
I’ve had to be pretty firm in the past with marketing teams that want to embark on a rebrand, and say however the design looks, it can’t include modal windows or animated carousels. And I think people think you’re weird when you say that.
> I’ve had to be pretty firm in the past with marketing teams that want to embark on a rebrand, and say however the design looks, it can’t include modal windows or animated carousels. And I think people think you’re weird when you say that.
Some small businesses create websites for branding only, and get their business exclusively offline. They just want to have a simple, static site to say "we exist, and we are professionals", so they are fine with the latest in web design.
Right. What I’m suggesting is their simple static site should probably just show the content they want to show, rather than write extra code [and add additional complexity] which makes that content gratuitously slide around the screen.
Carousels exist because everyone wants their pet project to be on the home page, and no one at the company has enough willpower to put a stop to that nonsense. No one actually likes the things.
The page is so narrow, like it's made for a vertical smartphone screen. That's ANNOYING!
Look up any typographic manual and you'll learn that you can't make lines of text too wide or else people will have trouble reading them. Example - https://practicaltypography.com/line-length.html .
This is also related to why professional newspapers and magazines lay out text in relatively narrow columns, because they are easy to scan just top-down while hardly moving your eyes left-right.
I do think that vertical phones are too narrow for conveying decent text, but you also can't have completely unbounded page widths because people do run browsers maximized on desktop 4K screens.
That's true, but 60 characters is way toward the "too narrow" side of the scale. I'd fatten the page to ~45--55 em (or rem), and BTW, strongly prefer font-relative units to pixels, which ... are increasingly unreliable as size determinants, particularly as high-def, high-dot-pitch displays on monitors, smartphones, and e-ink displays become more common. Toto, we're not in 96 dpi land any more.
I also strongly prefer at least some padding around the edges of pages / text regions, with 5--10% usually making things much easier to read.
I'd played with making those changes on Rachel's page through Firefox's inspector, with something along these lines:
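    body {
      max-width: 50em;    /* within the 45--55em range above */
      padding: 0 5%;      /* breathing room at the edges */
      font-family: serif; /* per the serif preference below */
    }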
To my eye that improves things greatly. (I generally prefer serif to sans fonts, FWIW.)
> Toto, we're not in 96 dpi land any more.
Unless you're banging directly on the framebuffer, logical pixels haven't been tied to device pixels for literally decades. CSS specifies pixels at 1/96 of an inch, a decision that goes all the way back to X11. 1rem == 16px, though this can be changed in CSS (just set font-size on the :root element) whereas you can typically only change pixel scaling in your display settings.
So yes, using rems is better, but pixels are not going to get dramatically smaller on denser displays unless the device is deliberately scaling them down (which phones often do simply because they're designed to be read up-close anyway).
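E.g.:

    :root { font-size: 112.5%; } /* 1rem is now 18px instead of the default 16px */
    body  { max-width: 45rem; }  /* layout scales along with that choice */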
My experience, for decades, has been that ems / rems are almost always preferable for scaling anything that's relative to text: body width, margins, padding, etc.
It's also possible to scale text itself to the reader's own preference, if any, by setting the body font size to "medium" (the browser-default keyword). Assuming the reader has set that value in their browser, they get what they expect, and for the 99.99966% of people who go with their browser's shitty default, well, they can zoom the page as needed.
(Most people don't change defaults, which is one key reason to use sane ones in products and projects.)
Sites which use px or pt (o hai HN) for scaling of text or fonts absolutely uniformly fail to please me.
(See my HN madhackery CSS mods links in my profile here, that's what I'm looking at as I type this here. On my principal e-ink browser, those aren't available, and I'm constantly fiddling with both zoom and contrast settings to make HN usable.)
Making pixel-based styling even more janky by not being actual pixels any more seems ... misguided.
That research may be true, but the layout of the page should be up to the user, not imposed by the developer. If I want my browser to display a web page using the entire maximized 4K browser window, that should be something 1. I can easily configure and 2. web developers respect, no matter what the "typographic researchers" think.
You might be more sophisticated than the average reader. Less sophisticated readers will just navigate away instead of messing with settings they don't understand.
Well, that site also has this: https://practicaltypography.com/columns.html
The style of the page can use CSS column properties to make use of the width of laptop/tablet displays, instead of defaulting to ugly "mobile size fits all" templates.
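Something like this (breakpoint and widths are placeholders):

    @media (min-width: 70em) {
      article {
        columns: 2 30em;  /* at most two columns, each at least ~30em wide */
        column-gap: 3em;
      }
    }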
While an interesting post because of the number of examples provided, this does read like somebody patting themselves on the back for building a website like it's 1995, when websites were not designed with the intention of making money or acting as a lead gen funnel.
Let's have a look at the websites she's helped build at her job and see how many of those old web principles were applied.
You can make a profitable text-only website, e.g. Craigslist.
But not everything on the web should be for profit.
One of my gripes with venture capital is that, were I to accept a large amount, I would be required to do all of those annoyances as part of the marketing plan they would impose on me.
And I feel a lot of those measures have been unnecessary - thinking back to my time at enterprise software product vendors, they had myriads of those kinds of annoyances to track "engagement" on their page.
The actual customers? Basically the big banks, in one case. Just how much were all those marketing/tracking cookies and scripts doing to secure those sales leads? Each bank had, essentially, its own dedicated salesperson/account manager - I don't think any bank is picking a vendor because one website had more marketing tracking scripts on it than another.
I agree with pretty much everything (and have implemented my own blog like it), but I would like to expand on a few things:
> I don't load the page in parts as you scroll it. It loads once and then you have it.
Lazy-loaded images are helpful for page performance reasons, as done by <img loading="lazy">. I have a script that flips one to eager (so it loads immediately) every few seconds, depending on page load speed, so that if you leave the page alone for a while, it fully loads.
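The idea, roughly (a sketch, not my actual script, which also watches load speed):

    <img src="screenshot.png" alt="screenshot" loading="lazy">
    <script>
      // Every few seconds, promote one lazy image to eager, so the page
      // eventually finishes loading even if the reader never scrolls.
      setInterval(() => {
        const img = document.querySelector('img[loading="lazy"]');
        if (img) img.loading = 'eager';
      }, 3000);
    </script>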
> I don't put godawful vacuous and misleading clickbait "you may be interested in..." boxes of the worst kind of crap on the Internet at the bottom of my posts, or anywhere else for that matter.
Most of the posts on my blog[0] are whatever videogame that I was just playing. Often, I'll play through installments in a series, or mention one game while talking about another. While I litter the text with links back to previous entries, I feel that it would be helpful to have a collection of these near the bottom of the page. How else would you know that I've written about the sequel? (I don't like to go back to old posts and add links to newer stuff like that.)
I have a "you might be interested in" section. My algorithm: do a search on the post title (up to the first number or colon, but can be customized), then add recent posts from the category you're looking at. Limit 6. I feel that genuinely shows everything relevant that I got and not be 'misleading' or 'clickbait'.
> I don't force people to have Javascript to read my stuff.
Agreed! JS should be used to enhance the experience, not be the experience. This mindset is so baked into how I write it, that most of my blog's JS functions have "enhance" in them.
> I don't force you to use SSL/TLS to connect here. Use it if you want, but if you can't, hey, that's fine, too.
Didn't we learn anything from Snowden? The NSA has recorded your receipt of this message.
[0] https://theandrewbailey.com/
A couple more:
- She doesn't change the color of the scroll handle to make it invisible.
- She doesn't override my browser's font size, making the text too small to read.
- She doesn't configure the page to <expletives deleted> disallow pinch-zooming on mobile.
Recently some webpages started failing with a JavaScript error message saying something is not supported, even though my JS is enabled. Probably the device I was using was "outdated".
Obviously the webpage and its full text content is shown to me for 3s before the error message appears and blocks access.
Is there a way to instruct browsers, when available, to just go right into reader mode? I wonder, when your page is as minimal as this one, whether you may as well just do that instead.
Or I guess at that point, you just don’t do styles?
Of all the things some people don’t do with their webpage, I’m the biggest fan of not doing visual complexity.
Brave has this.
brave://settings/?search=Speedreader
https://support.brave.com/hc/en-us/articles/360045031392-Wha...
My iOS Safari has it. I turned it on for the NYT, because I wanted a dark theme and then turned it off again because I realized that I like what they do with their pages (still have an ad blocker turned on though, because subscribers still see tons of ads).
My favourite pet peeve is opening an interesting looking link in a new tab, and then when I finally get around to looking at the tab, it's just a giant overlay with a prompt for an email address, with no hint of what I was actually trying to read.
Letting marketing folks on the internet was a mistake.
The one annoyance inflicted is the pointless container-for-everything with rounded corners. It makes the web page optically smaller on mobile and seems to serve no purpose.
Just extend the background to the very corners like hacker news does!
> I don't do some half-assed horizontal "progress bar" as you scroll down the page. Your browser probably /already/ has one of those if it's graphical. It's called the scroll bar. (See also: no animations.)
Sadly, I would argue that this is inaccurate. Especially on mobile browsers, the prevalence of visible scroll bars seems to have dropped off a cliff. I'll happily excuse the progress bar, especially because this one can be done without JavaScript.
JS progress bars also generally show you your progress through the main-content div or whatever, so even if they have a particularly egregious footer (I've seen footers that are over 1000px tall, with embedded youtube videos), the progress through the actual content is still somewhat faithfully reported.
Better would be to ditch the absurd footer, but still.
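For what it's worth, a sketch of that content-relative version (element names invented):

    const article = document.querySelector('main');
    const bar = document.querySelector('.progress');
    addEventListener('scroll', () => {
      const r = article.getBoundingClientRect();
      // fraction of the article scrolled past, footer excluded
      const seen = Math.min(Math.max(-r.top / (r.height - innerHeight), 0), 1);
      bar.style.width = (seen * 100) + '%';
    }, { passive: true });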
Android Chrome at least has a scrollbar visible as you scroll
Actually bookmarked, since Rachel mentions several annoyances that are easy to include accidentally even with the best of intentions. I wish she'd presented this as a categorized checklist instead of long-form text.
LOL'ed at "dick bar" - seriously that thing is so annoying.
I just disable CSS/JS/etc, the web is much nicer when you do that.
Legitimately curious, what interesting websites are usable without CSS?
I do my best to follow these as much as possible, though I think I am still doing a couple. I just haven't updated my own website in quite a while :/
Being nitpicky, and since the article itself focuses on things not inflicted on users, here are a few things it still inflicts on users:
- Changing line-height.
- Changing fonts (or trying to, if it is allowed in a web browser).
- Changing colors (likewise).
- Changing body's max-width, margins, paddings.
- Adding a mostly useless header.
I find these less annoying than the ones listed in the article, and they are easily mitigated by the reader view, disabled CSS, or custom global CSS, but there they are.
I used to agree with you, but a pure text web looks like Gemini, which I abandoned after a few days of getting lost in endless identical looking blogs.
There is no reason that websites shouldn't have room for some creative expression. For as long as writing has existed, images, fonts, spacing, embellishment, borders, and generally every imaginable axis has been used as additional expression, beyond the literal meaning of the text.
The body width is necessary because web browsers have long since abandoned any pretense of developing html for the average joe. It is normal to use web browsers maximized, so without limiting the body width the lines get ridiculously long and uncomfortable to read.
I think this is beyond being nitpicky.
She also avoided the other extreme - white monospace text on a black background. Some people seem to think that looks cool.
Has anyone built a search engine that indexes only "no annoyances" web sites?
Not exactly this but https://search.marginalia.nu/ will probably return sites that match these criteria
Kagi Small Web (the index is open source) comes to my mind, even though it doesn't specifically focus on UI annoyances.
https://blog.kagi.com/small-web
It's great that some people are fighting back against this. But it's too late. The modern web is unusable without browser extensions or ad/annoyance blockers.
I agree with pretty much everything on that page except:
> Web page annoyances that I don't inflict on you here / I don't use visitor IP addresses outside of a context of filtering abuse.
This point bit me personally about 5 years ago. As I browsed HN at home, I found that links to her website would not load - I would get a connection timed out error. Sometimes I would bookmark those pages in the hopes of reading them later. By accident, I noticed that her website did load when I was using public Wi-Fi or visited other people's homes.
I assumed it was some kind of network routing error, so I emailed my Canadian ISP to ask why I couldn't load her site at my home. They got back to me quickly and said that there were no networking problems, so go email the site operator instead. I contacted Rachel and she said - and this is my poor paraphrasing from memory - that the IP ban was something she intentionally implemented but I got caught as a false positive. She quickly unbanned my IP or some range containing me, and I never experienced any problems again. And no, I never did anything that would warrant a ban; I clicked on pages as a human user and never botted her site or anything like that, so I'm 100% sure that I was collateral damage for someone else's behavior.
The situation I saw was a very rare one, where I'd observe different behaviors depending on which network I accessed her site from. Sure, I would occasionally see "verification" requests from megacorps like Google/CAPTCHA, banks, Cloudflare, etc. when I changed networks or countries, but I grew to expect that annoyance. I basically never see specific bans from small operators like her. I don't fault her for doing so, though, as I am aware of various forms of network and computer system abuse, and have implemented a few countermeasures in my work sporadically.
> I don't force you to use SSL/TLS to connect here. Use it if you want, but if you can't, hey, that's fine, too.
Agreed, but I would like HN users to submit the HTTPS version. I'm not doing this to virtue-signal or anything like that. I'm telling you, a number of years ago when going through Atlanta airport, I used their Wi-Fi and clicked on a bunch of HN links, and the pages that were delivered over unsecured HTTP got rewritten with injections of the ISP's ads. This is not funny and we should proactively prevent that by making the HTTPS URL be the default one that we share. (I'm not against her providing an HTTP version.)
As for everything else, I am so glad that her web pages don't have fixed top bars, the bloody simulated progress bar (I like my browser's scrollbar very much thank you), ample visual space wasted for ads (most mainstream news sites are guilty), space wasted mid-page to "sign up to my email newsletter", modal dialog boxes (usually also to sign up to newsletter), etc.
> As I browsed HN at home, I found that links to her website would not load
Thanks for mentioning this, because I was having the same issue and I was surprised no one was mentioning that the site was (appeared to be) down. Switching to using a VPN made the post available to me.
> use the HTTPS version
It's probably reasonable to use HSTS to force https-aware browsers to upgrade and avoid injection of all the things she hates. Dumb browsers like `netcat` are not harmed by this at all. But even then ... why aren't you using `curl` or something?
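That is, a single response header (a year-long max-age as an example):

    Strict-Transport-Security: max-age=31536000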
> It's probably reasonable to use HSTS to force https-aware browsers to upgrade and avoid injection of all the things she hates.
There's a broad spectrum between a browser that is "aware" of https and a browser that has all the cipher suites, certificates, etc to load a given page.
If a browser does not support modern TLS (SSL), it probably also has unpatched security flaws. Unpatched browsers should never be used on the Internet because they will get hacked.
Sure but as a server operator, who cares? I already have zero trust in the client and it's not my job to punish the user for not being secure enough. If they get pwned, that's their problem.
Unless I'm at work where there's compliance checkboxes to disallow old SSL versions I'll take whatever you have.
If you serve insecurely, that means allowing downgrade attacks and malware injection for clients who are trying to do the right thing.
At least if you use HTTP it is blatantly insecure.
Everyone using HTTPS protects everyone. Having some operators choose to not migrate to HTTPS-only websites makes the web less secure by increasing the surface area of attacks on users.
Such a good list, I may have to copy it for my own site and stuff it somewhere as a "colophon" of sorts. Maybe this kind of thing should even be a machine-readable standard...
Rachel, I'm curious as to your mentions of 'old posts' that may not be compliant, e.g. missing an alt attribute - is this something you've considered scanning your html files for and fixing?
Over the last year I’ve gotten a couple of offers from PCB manufacturers to make my projects in exchange for a review and visibility in my projects and on my site. It was tempting, but every time I thought about doing it, it felt off.
I really like writing to readers and not obligating them to anything else. No sales push, no ads, no sign ups. It’s nice that it’s just what I wanted to share.
Obligatory take on web design process from The Oatmeal:
https://theoatmeal.com/comics/design_hell
I'm not affiliated with the Toast. But invoking this cartoon, I occasionally describe a web design as "Toasty".
Ah, but that's just the stylistic churn. There's also monetization, illustrated by this dark and terrifying tale, best-viewed in a desktop browser for proper impact:
https://modem.io/blog/blog-monetization/
Previously I've used the "disable styles" shortcut key in the Firefox web developer extension to make unfriendly websites more tolerable. Today, I wish Chrome had a shortcut key for enabling reader mode to do the same.
I use this bookmarklet to disable all CSS styling:
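Something along these lines (a sketch; it switches off every <style> element and stylesheet <link>):

    javascript:(() => {
      for (const el of document.querySelectorAll('style, link[rel="stylesheet"]'))
        el.disabled = true;
    })()

(Squashed onto one line as an actual bookmark; inline style attributes will survive.)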
Disagree with a few of these
There's nothing wrong with progress. Expecting a user to have a JavaScript-enabled browser is reasonable.
You don't expect an online retailer to accept mailed-in cash, do you?
Progress bars are annoying because that’s what scroll bars are for, and because horizontal progress bars (a) have the wrong orientation and (b) look like a loading/download indicator.
> Safari recently gained the ability to "hide distracting items"
I just looked into this feature and it looks awesome! Is there a way to do this in chrome? If not, are there any available chrome extensions that do this?
Is there a way to do anything in chrome now? It became your personal google port and will soon disable any content-modification for the sake of your adsecurity and prinvadcy.
I really loved this. It fits me exactly, and because of that I read to the bottom. Thank you.
Maybe it's a combination of my Framework's 2256x1504 resolution and the 2x scaling I'm using, but some web pages' "dick bars" cover a full third of the screen - it's infuriating. It makes sites like Rachel's doubly refreshing (1).
1: And Hacker News!
Yet unlike the 20+ other websites where I can just paste the main URL into NetNewsWire, it doesn’t seem to have an RSS feed…
<https://rachelbythebay.com/w/feed/>
(Under the RSS icon.)
http://rachelbythebay.com/w/2024/12/10/feed/
Perhaps you should either file a bug with NetNewsWire, or debug NetNewsWire and submit a PR so it works with her blog.
So we are now going back to special casing websites that don’t follow standards like the IE6 days?
Do you need special-casing to recognise
Using the link given from the website
https://imgur.com/a/DkifnBG
The w3.org validator says that https://rachelbythebay.com/w/atom.xml is a valid Atom 1.0 feed (https://validator.w3.org/feed/check.cgi?url=https%3A%2F%2Fra...).
It does seem like something's off about the feed. Vienna can read the file, but it comes up empty. But it doesn't seem like the problem is standards non-compliance.
This particular button is quite visible on the webpage
With NetNewsWire, even when you go to the URL from the link:
https://imgur.com/a/DkifnBG
Other posters mentioned her IP block - I wouldn't be surprised if that was the cause since automated netnewswire traffic might easily be confused with abuse.
There have been a variety of posts semi-recently about a “feed reader score” project, and maybe NNW is particularly misbehaved?
https://rachelbythebay.com/w/2024/12/10/feed/
https://rachelbythebay.com/w/2024/12/17/packets/
NNW works for every other site that has an RSS feed and someone else just commented that while it’s a valid atom feed, when they try to use it in another newsreader, they get an empty result
I was reading the pypi blog on my tablet the other day and it had an annoying "back to top" popup that kept covering the text I was trying to read.
My website is almost as polite; the differences are:
- I don't store the date in the URL
- I redirect you to https automatically, but perhaps I should rethink that
- My Photos page lazy-loads pictures, because it shows over 1000 thumbnails and took a very long time to open on a mobile phone
- Some of my posts link to YouTube videos and embed them, so that's what comes from a different origin
Yeah, still pretty OK I think.
That's a lot of words to say "I don't need to make money from this, and really only want to publish some text".
Everything follows from that, but not just in a bad, dark-pattern profit-optimizing way.
If you provide a paid service, you need auth, and then you damn well better use HTTPS.
If you have anything more complex or interactive than text-publishing, you'll quickly run into absurd limitations without cookies and JavaScript. And animations and things like sticky headers can genuinely improve the usability.
For the style of reading I normally do, this particular width is actively harmful to my reading comprehension. I would prefer just a bit wider text generally. This is something which the site does inflict on the reader. I agree that many sites are too wide in general, but this feels too narrow by about 33% for my liking.
Additionally, the way that the background degrades to a border around the text when using Dark Reader also causes problems in a similar way (due to the interaction between jagged text and a strong vertical line).
These are subtle points though, and I appreciate the many annoyances that are not there when reading Rachel's stuff.
> I don't do some half-assed horizontal "progress bar" as you scroll down the page. Your browser probably /already/ has one of those if it's graphical. It's called the scroll bar.
... unless you use GTK, and then it hides the scroll bar because it's sooo clever and wants to bestow a "clean" interface upon you. Yes, I'm looking at you Firefox.
`layout.testing.overlay-scrollbars.always-visible` in `about:config` is your friend.
Gee, if only there were a search engine that penalised pages and down ranked them for any of these annoyances, especially advertising, so one could get results that didn't annoy you when you visited them. oh wait, that would be a Google killer... don't want to go there...
I genuinely can't tell if this is sarcasm but yeah Kagi aims to do that
> Now about SSL/TLS, we programmers are often forced to do this because there are many users who, when faced with the absence of the padlock on the page, don't even bother to continue for fear of having their data stolen.
I got to experience this last week: a family member uses the Gmail app to check Hotmail emails. Suddenly the app started asking them to re-enter login information, and the message looked like a phishing mail. When you clicked on it, it popped up what looked like the Outlook OpenID login page, but without any address bar shown. Is it the app? Some webpage? Looks like phishing.
Perfect job from the UI team: either you don't update your credentials because it really looks like a phishing attempt, or you get trained to enter those credentials into random apps/websites.
thank you