>That said, I've never really seen a consistent argument against _protecting_ prey animals from lions (and letting lions die out).
You might begin your research into this subject by reading about the reintroduction of wolves into various locations where they had been eliminated. The prey populations always fare better when there are predators around them. Without the pressure of predators to balance the population, deer and beaver and other prey animals tend to over-populate, which results in damage to the environment (over-grazing) and then eventually starvation of the prey animals.
The deer are either going to be killed by a wolf, killed by a human, or killed by disease and starvation. At least the first two options don't also cause extensive damage to the environment before they happen.
When deer don't have predators (of whatever species), they overpopulate and die of starvation or get killed running in front of vehicles.
I'm not convinced that dying of starvation is better than a sudden death. The vehicle accidents harm humans as well as the deer. The hungry deer damage crops and gardens.
I think the overpopulation also causes the spread of parasites (ticks etc.) which carry disease.
>No, because other animals don't have mental capacity to truly understand the consequences of their action.
A lion knows full well that it survives another month when it kills a wildebeest and that it can feed its cubs. What more is there to "truly" understand?
When a mentally capable human kills another human, they do so with the understanding that the killing will cause other humans to grieve and will remove that person's productivity from the world. They've made that connection and decided to end that life anyway, so they are held responsible for the outcome. This is why people who are mentally incapable of understanding this are not punished the same way as people who are.
A lion is not, to the best of our understanding, capable of understanding that killing another animal will cause grief to that animal's family (if the family is even capable of understanding and grieving the death at all). And then we'd have to prove the killing was done with malicious intent, because merely being responsible for someone's death, even as a human, is not automatically a criminal offense.
But it's a silly argument anyway because humans are often not prosecuted for the death of non-human animals, especially when that non-human animal is killed for food. So the idea that we would prosecute a lion for killing and eating an antelope is laughable when we don't prosecute a farmer for killing a cow for food.
Prey animals must have a right to get eaten by predators, just as the predator must have the right to prey on them. They both have evolved to balance each other out, after all.
Alternatively, prey animals would have to have the right to birth control, whereas predators would require a food program. Both would need to be provided by humans, of course, which would inevitably lead to an animal welfare state.
I have nerve damage because of a spinal problem; I walk with a slight limp not because I've lost strength but because I can't always accurately 'feel' where my foot is.
If you're using Google Analytics, you're paying for the analytics by letting them abuse your users. You can get some protection by using their anonymisation setting and signing a data protection addendum.
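With the standard gtag.js snippet, that anonymisation setting is a one-line config flag. A minimal sketch (the property ID here is just a placeholder):

```ts
// gtag is provided by the standard gtag.js snippet; declared here so this compiles standalone
declare function gtag(...args: unknown[]): void;

// anonymize_ip asks GA to drop the last octet of the visitor's IP before storing it
gtag("config", "UA-XXXXXXXX-1", { anonymize_ip: true });
```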
If you really want to be on the safe side and respect your users' privacy, the best choice is self-hosting a free analytics tool like Matomo (formerly Piwik). Of course you still need to make sure your hosting company respects your users' privacy too.
It's literally not the GDPR's fault if you're having problems protecting your users' privacy. It's the network effect of entire ecosystems of companies never having had any concern for privacy over decades. Don't complain to the EU, complain to companies not implementing privacy by default and only trying to shoe-horn it in as an afterthought.
And they can have easy ways of tracking popularity. You can use Google Analytics and similar products in compliant ways, and the tools for that have existed for ages (at least in the case of GA).
You seem to be missing that you're entering into a partnership with a third party. They become a data processor on your behalf. They process a whole lot more, and you have access to a lot of that data. Fortunately, the social media platform includes the privacy policy and consent process in their onboarding of users, so you don't need to worry about it for the purposes of social login.
> We used our websocket service to publish updates to all the clients.
I used /r/place from a few different browsers with a few different accounts, and they all seemed to have slightly different views of the same pixels. Was I the only one who experienced this problem?
When the /r/place experiment was still going, I assumed that they grouped updates into batches of some sort, but now it seems like they intended all users to receive all updates more or less immediately.
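For what it's worth, server-side batching could be as simple as something like this. This is purely a hypothetical sketch (the names and the 100 ms interval are made up, not what reddit actually did):

```ts
interface PixelUpdate { x: number; y: number; color: number }

// broadcast() stands in for whatever pushes a payload to every websocket client
declare function broadcast(payload: string): void;

const pending: PixelUpdate[] = [];

// called once per pixel placement
function record(update: PixelUpdate): void {
  pending.push(update);
}

// flush accumulated updates a few times per second instead of sending one message per pixel
setInterval(() => {
  if (pending.length === 0) return;
  broadcast(JSON.stringify(pending.splice(0, pending.length)));
}, 100);
```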
Yeah, we went into it a bit in the "What We Learned" section, but that was most likely during the time we were having issues with RabbitMQ. I believe it was mostly fixed later on, but either way, we found a new pain point in our system we can now work on.
Surprised you're using RabbitMQ. It's one of those things which work great until they don't (clustering is particularly bad), and then you have almost zero insight into the issue, and have to resort to the Pivotal mailing list.
Have you looked at NATS at all? We're using it as a message bus for one app and it's been fantastic. It is, however, an in-memory queue, and the current version cannot replace Rabbit for queues that require durability.
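For anyone curious, basic usage is tiny. A minimal sketch with the official JavaScript client (server URL and subject name are made up):

```ts
import { connect, StringCodec } from "nats";

async function main() {
  const sc = StringCodec();
  const nc = await connect({ servers: "nats://localhost:4222" });

  // plain subscription: this subscriber sees every message on the subject
  const sub = nc.subscribe("events");
  (async () => {
    for await (const m of sub) {
      console.log("got:", sc.decode(m.data));
    }
  })();

  nc.publish("events", sc.encode("hello"));
  await nc.drain(); // deliver pending messages, then close
}

main().catch(console.error);
```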
i've been using rabbitmq heavily (as in, the whole infrastructure is based on two rabbitmq servers) for a long time and i've never seen it fail.
tbh, i never used clustering (because it's one of the shittiest clustering implementations i've ever seen) but we do use two servers (publishers connect to one randomly and consumers connect to both) and it seems to handle millions of messages without any issues.
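roughly, that setup looks like this. hostnames and the queue name are made up, and it's sketched with the node.js amqplib client purely for illustration, not necessarily what we run:

```ts
import amqp from "amqplib";

// two standalone (non-clustered) brokers; hostnames are placeholders
const brokers = ["amqp://rabbit-1", "amqp://rabbit-2"];

async function publish(body: string) {
  // publishers pick one of the two brokers at random
  const url = brokers[Math.floor(Math.random() * brokers.length)];
  const conn = await amqp.connect(url);
  const ch = await conn.createConfirmChannel();
  await ch.assertQueue("jobs", { durable: true });
  ch.sendToQueue("jobs", Buffer.from(body), { persistent: true });
  await ch.waitForConfirms(); // make sure the broker has it before closing
  await conn.close();
}

async function consumeAll() {
  // consumers connect to both brokers so messages on either one get picked up
  for (const url of brokers) {
    const conn = await amqp.connect(url);
    const ch = await conn.createChannel();
    await ch.assertQueue("jobs", { durable: true });
    await ch.consume("jobs", (msg) => {
      if (msg) {
        console.log(url, msg.content.toString());
        ch.ack(msg);
      }
    });
  }
}

consumeAll().then(() => publish("hello"));
```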
of all servers i've ever used, rabbitmq is by far the most stable (together with ejabberd).
RabbitMQ is decent if you don't use clustering (which, I agree, is shitty). I have some quibbles with the non-clustered parts, but nothing big.
Right now, the main annoyance is that it's impossible, as far as I understand, to limit its memory usage. You can set a "VM high watermark" and some other things, but beyond that, it will — much like, say, Elasticsearch — use a large amount of mysterious memory that you have no control over. You can't just say "use 1GB and nothing more", which is problematic on Kubernetes where you want to pack things a bit tightly. This happens even if all the queues are marked as durable.
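For reference, the watermark itself is just a threshold in rabbitmq.conf at which publishers get throttled, not an actual cap on usage (example values, not a recommendation):

```
# fraction of installed RAM at which RabbitMQ starts blocking publishers
vm_memory_high_watermark.relative = 0.4

# or an absolute threshold instead:
# vm_memory_high_watermark.absolute = 1GB
```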
yeah, we have dedicated machines for rabbitmq because it's basically memory hungry. but i like it that way because it's only going to crash if the machine crashes.
Note that NATS is currently pub/sub, which is an "if a tree falls in the forest" situation. Messages don't go anywhere if nobody is subscribing.
So it's awesome for realtime firehose-type use cases where a websocket client connects, receives messages (every client gets all the messages, although NATS also supports load-balanced fanout) for a while, then eventually disconnects.
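The load-balanced variant is just a queue group on the subscription, so each message goes to only one member of the group. A small sketch with the JavaScript client (names made up):

```ts
import { connect, StringCodec } from "nats";

async function worker(name: string) {
  const sc = StringCodec();
  const nc = await connect({ servers: "nats://localhost:4222" });

  // subscribers sharing a queue group split the messages between them,
  // instead of each one getting every message
  const sub = nc.subscribe("events", { queue: "workers" });
  for await (const m of sub) {
    console.log(`${name} handled:`, sc.decode(m.data));
  }
}

// run two members of the same group; each published message reaches exactly one of them
worker("a");
worker("b");
```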
NATS is ridiculously fast [1], too.
There's an add-on currently in beta, NATS Streaming [1], which [2] has durability, acking/redelivery, and replay, so it covers most of what you get from both RabbitMQ and Kafka. It looks very promising.
I experienced this as well. I have a different account logged in on mobile than on my desktop. I wouldn't say things were drastically different, but when there was a location with a ton of activity (like OSU or the American flag towards the end), I saw different views between them.
Also, like some people suggested, it'd be nice to have an option for running the game at a higher speed.