Hacker News | ag_47's comments

Basically, a router somewhere in Russia claimed to be the owner of some IP addresses belonging to Google, Facebook, etc. Other neighbouring routers began forwarding packets from actual users to this router. Those packets contained the HTTPS requests people were making to these sites.


Bit late to the party but what could be done with the information that they got their hands on?


Yeah, they show ads in between game sessions.


FWIW, as a mostly Android user, the latest Oreo update was pretty terrible as well. It's all about adding new "features" just for new features' sake, isn't it?


Note that this is intended and desirable for any server with moderate to high loads. Plus, Nginx can reload its configuration gracefully without dropping connections.


For the uninformed: The reason .htaccess is not desirable is that the server has to check each directory in the request path for an .htaccess file and read its content.


What's the actual effect of this? Negative filesystem lookups are cached, and I believe they're cached forever on local filesystems (since you know if anyone's writing to the local filesystem), so the overhead is just that of a couple of syscalls. That feels like it's small enough that you really want to benchmark to see if it matters for you.
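A rough sketch of the point about cached negative lookups: probing every directory component for a nonexistent file is just a handful of `stat()` calls, and after the first miss the kernel's dentry cache answers them. The directory layout below is made up for illustration.

```python
import os
import tempfile
import time

# Build a 4-level directory tree like /a/b/c/d under a temp root,
# mimicking the request path from the benchmark below.
root = tempfile.mkdtemp()
dirs = [root]
for name in ("a", "b", "c", "d"):
    dirs.append(os.path.join(dirs[-1], name))
    os.mkdir(dirs[-1])

# Each request with AllowOverride All costs one extra stat() per
# directory component, probing for an .htaccess that isn't there.
N = 10_000
start = time.perf_counter()
for _ in range(N):
    for d in dirs:
        os.path.exists(os.path.join(d, ".htaccess"))  # stat(); miss served from cache
elapsed = time.perf_counter() - start
print(f"~{elapsed / N * 1e6:.1f} microseconds of probing per simulated request")
```

On a local filesystem this typically comes out to single-digit microseconds per request, which is consistent with the benchmark in the next comment showing no measurable difference.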


No effect on my server, a $5-per-month DigitalOcean droplet, using Apache Bench (ab) to retrieve a file four directories deep: /a/b/c/d/e.html

With .htaccess files turned off (AllowOverride None), I got 282 requests per second. With .htaccess files on (AllowOverride All) I got 286 requests per second (yes, actually a few more hits per second with .htaccess turned on, but the results swing from test to test by a few percent anyway).


I've been greeted by obvious bugs in their UI too many times to count. It's the only "major" website I've ever come across with amateur mistakes. I can't recall any specifically, but +1 on not spending more than 10 seconds on it. And to be fair, the new UI seems to be cleaner so far.


Now I'm curious: are there any websites that do something similar to /r/place? (hackathon idea?)

Also, reminds me of The Million Dollar Homepage [1].

[1] https://en.wikipedia.org/wiki/The_Million_Dollar_Homepage


A long time ago, I built http://www.ipaidthemost.com/, which is kinda related, at least to TMDH anyhow. Far, far less collaborative than /r/place, but similar in terms of staking out ownership.


Hilarious. That's got to be the most direct monetization strategy since TMDH.



I love it. Wish this had been my idea.


Does that actually make money?


Looks like it's made $23.25 so far, at $1.50 for the message.

If it reached 5$, he'd have made 25,000$


Your math is off -- if it reached $5, he'd have made $250. I think you were counting in cents. To convince yourself:

    >>> total = 0
    >>> for i in range(5, 500, 5):
    ...     total += i
    ...
    >>> total
    24750


Hey you're right, my bad.


But there's all the hosting costs to consider.



Actually this idea has been around for years, and sadly isn't new at all. I just checked and there is one that's been around since at least 2006, http://da-archive.com/index.php?showtopic=42405

I remember lueshi


Right, true. Collaborative drawing was basically the "hello world" of real-time platforms back in the day (Firebase, Parse etc). But I think most of those were ephemeral canvases.


Yeah, collaborative drawing is kind of old hat, but when you can use the context of the modern, social web to provide some new modes of interaction around it, it can be interesting again. Same applies to more mundane things like text.


/r/place spawned this clone: http://placepx.com


Not quite the same, but my startup, Formgraph[0], also does public, real-time collaborative drawing. I even did a similar write-up about the stack behind it the other day[1]. Looks like they're relying on Cassandra and Redis. I went with RethinkDB. Should probably do a write-up about the front end in the future.

[0] https://www.formgraph.com

[1] https://medium.com/@JohnWatson/real-time-collaboration-with-...


Some redditors have created /r/place derivatives already. I'm not aware of one prior to /r/place, but it seems impossible that it hasn't been done before.



<shameless-plug> Also available on GitHub (https://github.com/8192px/8192px) ;-] </shameless-plug>


There were definitely a few that popped up after /r/place closed down. (pxls.space being the most popular) They were nowhere near as successful as /r/place though.


4chan founder moot's previous startup was something called Canvas.


That was not realtime nor collaborative on a single pane. It was a "remix" platform where one user created the initial drawing, then people used it as a base and drew around/over it to change it - in a new picture.

It was quite interesting for a little while.


I remember them getting a NASA contract recently. Here: http://spacenews.com/spacex-wins-contract-to-launch-nasa-ear...


Interesting.

Don't hate the player, hate the game, though. Welcome to the jungle.


[flagged]


You make it sound like he's murdering people to get ahead, when he's just targeting a social network with content that is generally relevant and popular. The "anything" in this case is focused promotion.


> You make it sound like he's murdering people

You are putting words into my mouth.

He clearly wrote:

> I've made a ton of submissions in the past 8 months and experimented with various combinations of stories/titles that the HN audience might like, without always being successful. (Probably one out of every 4-5 submissions turned into an angry mob haha)

20%-25% of his submissions made readers angry.

Nevertheless he kept going and tried, by his own admission, to micro-optimise his submissions to get visibility.

Maybe my sense of ethics is different than yours, but please don't put words in my mouth.


Given the number of posts on HN these days that seem to spur a mob of angry people, I'm not even sure 20-25% is a bad figure.

You said he's willing to do anything, when what you meant was that he is willing to optimise his work to better appeal to a specific target community. How does that negatively impact their community, exactly?

My sense of ethics precludes me from creating a temp account to mask my own identity while attacking someone else online so I guess you're right that our ethics differ.


so by your logic he should not do whatever he can to create the most effective submission possible?

That seems counter-intuitive since really all HN cares about in the end is quality content.


> so by your logic he should not do whatever he can to create the most effective submission possible?

Ideally without breaking the HN rules.


> even though there are better solutions for reverse-proxies out there... Apache Traffic Server to even Apache httpd

Really? Completely disagree. Just look at the configuration files for these vs nginx.


So your total criteria for what constitutes a "better reverse proxy" isn't performance, isn't lowest latency, isn't full HTTP compliance, isn't dynamic reconfiguration, or pluggable load balancing mechanisms but rather configuration files??

Sorry if I don't hold your opinion to that high a standard in that case.


Configuration files are important. You understand that once you have to untangle thousands upon thousands of lines of apache configuration. Luckily for me, that can pay a decent rate :D

In the context of load balancing, on the performance + latency + load balancing mechanism + configuration files criteria, Apache is the worst by a huge margin compared to both HAProxy and nginx.


"performance + latency + load balancing mechanism"

Maybe 5-7 years ago then yeah... maybe. Not even close today. Apache has lowest latency and faster total transaction time based on various benchmarks. It all depends on how you are using it.

"configuration files criteria"

Got me there. But then again, 2.4 adds a LOT of ways to even streamline that, like mod_macro, mod_define, etc...


Performance + latency => I dunno what world you live in. Apache is still stuck in the prefork era (not that it's mandatory but it is how it works most of the time). It's not even playing in the same order of magnitude.

Load balancing => Apache doesn't even support healthchecks. I won't even get into the lack of TCP/TLS support or the lack of some load balancing algorithms.


I don't know what world you live in, but Apache only runs prefork if you configure it to run in prefork. Saying that is "how it works most of the time" is complete and total nonsense. I don't even know how to parse that...

Also complete and total nonsense is the claimed lack of health checks (which are, iirc, only available in paid nginx), TLS support, and load balancing algos. I think nginx has some kind of hash LB method that httpd doesn't, although httpd has round-robin, byrequests, bytraffic, and bybusyness.

Wow.
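For reference, the "hash LB method" alluded to above does exist in open source nginx: the `hash` directive in an `upstream` block (with an optional `consistent` parameter for ketama-style consistent hashing). A minimal sketch, with placeholder server names:

```nginx
# Consistent-hash balancing keyed on the request URI, so the same URI
# tends to land on the same backend (useful for cache locality).
upstream backend {
    hash $request_uri consistent;
    server app1.internal:8080;
    server app2.internal:8080;
}
```

Active health checks, by contrast, are an nginx Plus feature; open source nginx only does passive failure detection via `max_fails`/`fail_timeout`.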


There are numerous modules and setups stuck in prefork mode. And the alternative with workers is a joke compared to the event loop of HAProxy and nginx.

HAProxy > nginx > apache

Of course if you compare apache to nginx, you can find stuff where nginx is lacking too.

Agreed, a lot of critical features are stripped in the open source nginx.

TLS = tcp with tls, not https.


"There are numerous modules and setup stuck in prefork mode"

I have no idea what in the heck you are talking about. If one must use mod_php then it is recommended that you avoid a threaded MPM, but even that is no longer 100% true; you can run mod_php under Event in most implementations with no issues at all.

"stuck in prefork mode" is a nonsensical phrase. prefork is a MPM.

Just because something is threaded doesn't make it slow. Take Varnish, for example. There are tradeoffs in all implementations; that's why Apache httpd allows for prefork, worker (threaded), and event-based architectures, which the sysadmin/devops can choose for their own particular case. But "Oog. Event be Good. Threads be Bad" completely misses the very real tradeoffs of both.


Apache's default is prefork, which was appropriate for running PHP apps the way they were run more than a decade ago. It is utterly inappropriate nowadays.

For load balancing, events trumps every other mode, that's just the way it is.

HTTP and TCP balancing are inherently single-threaded operations. There is no need for threading at all (multiple threads actually decrease performance).

In HTTPS and TLS mode, the encryption is the bottleneck. So you use one process per core (and that process needs events).

HAProxy lets me have one process pinned to each core of the system while the network card IRQs are on a dedicated core. Apache can't do half of that.

We could get into how the nginx and HAProxy parsers are insanely optimized, whereas Apache's is not, and cannot be, because of the modules.

Of course, not everyone has to push 10g or 30k requests/s with their load balancers.


Apache default is NOT prefork, unless you are using something older than Apache 2.4. Of course it is utterly inappropriate nowadays, which is why NO ONE USES IT, but people love spreading the FUD that Apache is still prefork.

The rest of your "analysis" suffers from the same misinformation as this. I especially like "Whereas apache is not and it cannot be because of the modules.". I have no idea what in the world you mean by that. Why "because of the modules"?


Having modules requires Apache to parse a lot of information from the request and make it available and editable in variables.

This gives a lot of flexibility, but it has a performance cost.


And this differs from nginx how, exactly? Since it also "has modules"... So since nginx has modules it requires nginx to "parse a lot of information from the request, and make them available and editable in variables. This gives a lot of flexibility but it has a performance costs." ??


You're right, we should build a CloudFlare competitor based on ATS and take over the market.


As someone younger who never really used Apache, I don't see any reason to do anything with it instead of Nginx. Other than supporting "legacy" setups, what's the point of Nginx load balancing Apache? Configuring nginx is just so much more intuitive.


At $main_work, the reason is that there's a bunch of RewriteRules which last I checked simply couldn't be done by NGINX.

OTOH, Apache suffered from the "slow loris" attack, so the whole shebang ended up being nginx sitting in front of a few front-end apache instance kinds, which sit in front of a dozen or so backend apache instance kinds.

I find it interesting that although on those servers there are 12x more Apaches than NGINX, it might get counted as a server "using nginx"...

... and that's just because the whole she-bang sits under cloudflare, which reports Server: nginx-cloudflare ;)


Apache can mitigate slowloris attacks through mod_reqtimeout. I recommend using this.
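For the curious, mod_reqtimeout works via the RequestReadTimeout directive. A sketch along the lines of the documented example (the exact numbers should be tuned to your traffic):

```apacheconf
# Drop clients that dribble the request in too slowly (slowloris-style):
# allow 20s to start sending headers, extended up to 40s as long as the
# client sustains at least 500 bytes/s; same idea for the request body.
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
```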


Both nginx and apache are vulnerable to slowloris. To mitigate an attack like that you need an architecture with a scheduler, that kills slow connections, not a naive event loop.


Probably a lot of mod_* uses like PHP applications that haven't been migrated to php-fpm or something, and JBoss/Tomcat/Websphere/Weblogic websites. You of course can just proxy all of these things with nginx, but it's probably not worth it for most companies.


PHP is definitely a huge chunk of the reason why nginx has overtaken Apache. php-fpm with nginx is the de facto standard, and PHP is far more prevalent than a lot of people think. Apache's mod_php(X) vs. nginx+php-fpm isn't even a debate. If someone is currently using Apache+mod_php, they probably have a smaller product that will eventually have to switch to nginx+fpm in order to scale.

While I imagine PHP is the single largest reason, other languages that support or expect the use of fastcgi are also very easy to configure with nginx, whereas I can count on one hand the number of businesses I've seen using Apache's mod_fcgid.
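The fastcgi hand-off being described looks roughly like this in nginx (paths, root, and socket location are illustrative and vary by distro):

```nginx
# Minimal sketch: static files served directly by nginx; only *.php
# requests are handed off to a PHP-FPM unix socket.
server {
    listen 80;
    root /var/www/example;
    index index.php index.html;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
}
```

The key point from the comment above: no PHP interpreter is loaded for the static-asset requests, which is the structural advantage over mod_php keeping an interpreter in every child process.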


I'm probably about 2 years out from being bleeding edge, but php-fpm & nginx are far from the de facto standard, at least when you look at how the web is being served at large through cPanel/WHM.

I don't believe cPanel/WHM even supports nginx yet as a standard option.


Not to be disrespectful - I know that cPanel and similar have their place - but no real business that expects to have a presence is using cPanel or any other "easy setup".


fpm is being dropped now that PHP 7 contains many of the improvements that made fpm popular. mod_php+Apache with .htaccess turned off is the faster stack. Put an nginx server in front to serve static content and that's the fastest stack going forward.


FPM isn't just popular because of speed. It's popular because of pools and the fact that it's not a giant security risk to have installed. mod_php shares permissions across everything it executes. If you have any site on the same Apache stack as another, they're accessible to each other as far as PHP is concerned. This makes the attack surface of a website significantly larger unless you're hosting exactly one site that you have locked down to one directory.

I also really doubt that PHP 7.1 mod_php and Apache without .htaccess is faster than nginx and php7.1-fpm in 'ondemand' mode. Even a $5 DO server can handle hundreds of requests a second to big frameworks like Drupal or MediaWiki, and they're securely separated. You can lock down permissions at the group level to the executing PHP pool, then make only specific users belong to that pool and bind a directory in their home to the actual website location.
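The per-site pool isolation described above is configured in FPM's pool files. A sketch (pool name, user, and paths are illustrative):

```ini
; One pool per site, each running as its own user, so sites can't read
; each other's files even on the same host.
[example_site]
user = example_site
group = example_site
listen = /run/php/example_site.sock
pm = ondemand
pm.max_children = 10
pm.process_idle_timeout = 10s
```

With `pm = ondemand`, workers are only spawned when requests arrive and are reaped after the idle timeout, which keeps memory usage low on multi-tenant boxes.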


FPM is being dropped? This is news to me, where can I read about this?


As people upgrade, many are choosing PHP 7 through mod_php.

Below are links to benchmarks and discussions around mod_php vs. fpm, from last year (2016). Fast forward to today: I am seeing people move to PHP 7 and move back to mod_php. I believe we are at the start of a movement. Articles/stories will follow, but only after the fact.

https://www.symfony.fi/entry/symfony-benchmarks-php-fpm-vs-p...

https://www.reddit.com/r/PHP/comments/4bi9a4/why_is_mod_php_...


The first link is about PHP-PM, which is not mod_php, and is a new and unproven stack. The second link is a completely bullshit "echo 'Hello World';" with 100 concurrent requests - that benchmark is offering the stereotypical, utterly meaningless, metric.

The fact is that Apache + mod_php will keep an instance of the PHP interpreter active in every single child httpd process. With nginx+fpm, your static assets are served directly from nginx without the overhead of an unnecessary PHP interpreter loaded into that process, while only your PHP requests are funneled to FPM. The performance overhead of having a PHP interpreter loaded into the process that is only serving a static asset is astronomical.

At the end of the day, benchmark your shippable product. Never try to benchmark a "Hello World" or a Wordpress installation if you're not shipping a Hello World or Wordpress codebase. Purely based off professional experience, I have never seen a real-world app perform better on Apache+mod_php than on nginx+fpm.

The only thing PHP 7 gave us was essentially the ability to ignore HHVM as a "required performance booster". 90% of companies were already able to ignore HHVM; with the improvements made to PHP 7, it's now 95-99%+ of products that don't need to evaluate HHVM as a mandatory alternative. And yes, nginx+fpm is still the defacto standard for PHP 7; the links you have provided do not say any different.


I can only speak for myself, but I'd rather avoid the Apache HTTPD stack altogether. If you run multiple PHP FPM pools, it's nicer and easier to only recycle individual pools as necessary (rather than the whole process). Important to those of us who run lots of microservices (or even lots of PHP sites) on single hosts.


nginx is much simpler than full-service servers like Apache. Which is good if you want to do something easy fast (like terminate TLS, proxy, load-balance, simple redirects, simple header munging, etc.). And not good if you want to do something more complex and get into learning how nginx rewrite rules really work (totally not obvious), how if and other predicates really work (multiple articles in the docs suggest it's not obvious at all), and what limitations are needed to achieve the simplicity and quickness. So if you want your webserver to do something complex, you'd go for Apache. But you may still put nginx in front for LB, static content, TLS termination, etc.


A good reason to use nginx (or better HAProxy), is to stop people from writing endless mess of redirect and rewrite rules.


Not many people do it just for fun. Stopping people from doing what they need to get the job done is usually not the most productive idea.


What people need to do to get their job done is very, very, frequently to work around an existing mess with new hacks that make the mess even harder to clean up. And if that's what they need to do, then they should do it.

But we should also ask ourselves how to get into such messes less often. That is, how to systematically reduce the number of early-stage design errors. One trick is to choose tools that forbid known anti-patterns.

That means the designers must work harder up-front to figure out a system that can do without the work-arounds. But that is a feature, not a bug; indeed that is what our processes should try to achieve.


There are very few justified usages that require writing a maze of rewrite rules.

Most people do it because they have no idea what they are doing and they never decided on a naming convention for their apps and domains.


What are some justified uses of rewrite rules?


Making broken links pointing at your site work (301), without breaking links to the correct URL.

Especially links on sites you have no control over.
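That kind of fix is a one-liner in most servers. A sketch in nginx (paths are placeholders):

```nginx
# Permanently redirect a known-broken inbound link to the real URL
# without touching any other routes.
location = /old-article.html {
    return 301 /articles/new-article/;
}
```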


Fair point, but,

> if you want your webserver to do something complex, you'd go for Apache

I would tend to disagree. Assuming "complex" = "business logic", Apache hardly seems the right choice. PHP/Python/Node/GoLang or Lua right inside nginx would be more appropriate in most cases, imo.


There are degrees of complexity; there's a kind of spectrum, even. If you want full-blown business logic that requires a language like PHP or Go, it's insane to try and make Apache do it. If you need a set of simple rules within what Apache (including all of its module ecosystem) can do and is designed to do, it would be a big mistake, costing a lot of scalability, to deploy a high-level language instead. Right tool for the job, always.


> if you want your webserver to do something complex, you'd go for Apache

> a set of simple rules that are within what Apache (including there all the module ecosystem) can and is designed to do


Again, there are degrees of complexity. Very simple - nginx; kinda more complex - Apache; somewhat complex but still doable without a Turing-complete language - third-party Apache modules; needs a Turing-complete language or you're wasting your time - Python/PHP/Perl/pick your poison.


Shared hosting setups where you want .htaccess support (or something comparable, but same basic issue: requires some additional layer to validate and generate a centralized nginx configuration, or some other extra layer, with Apache it is built in and well-documented).

WebDAV support.


As far as .htaccess support goes, of the couple of shared hosting providers I've used, lighttpd has been the server of choice.


A: Features.

