Hi, sorry for the issue. The instance Redis.io runs on is very small, with only 1GB of memory, so the OOM killer killed the Redis instance. Normally this wouldn't happen, because redis.io itself uses just a few keys, but I recently installed try.redis.io there, and apparently it uses a lot more keys. I'll have to monitor key usage and change the code to flush old sessions faster. The current instance costs $5/mo at DigitalOcean; maybe I went too cheap on this :-D But the idea back then was that I own everything and never ask anyone to cover expenses for official Redis OSS stuff. That no longer makes sense in the new setup, but still...
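(For illustration: one way to flush old sessions faster is to give each session key a TTL so Redis expires them on its own. A minimal sketch using standard Redis commands; the key name and the 30-minute TTL are assumptions, not what try.redis.io actually does:

    # write the session with a 30-minute TTL (key name is hypothetical)
    SET session:abc123 "<session data>" EX 1800

    # or attach a TTL to an already-existing session key
    EXPIRE session:abc123 1800

With a TTL on every session key, stale sessions disappear automatically instead of piling up until memory runs out.)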
EDIT: just to be a bit safer, I set a memory limit in the instance as well.
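(For reference: a cap like that can be set with Redis's own maxmemory directive, either in redis.conf or at runtime. A minimal sketch; the 512mb figure is just an assumed safe value for a 1GB box, not the actual setting used:

    # redis.conf: keep Redis well under the instance's 1GB so the OOM killer stays away
    maxmemory 512mb
    # when the cap is reached, evict the least recently used keys
    maxmemory-policy allkeys-lru

The same can be applied to a running instance with "redis-cli CONFIG SET maxmemory 512mb" and "redis-cli CONFIG SET maxmemory-policy allkeys-lru". The allkeys-lru policy suits a cache-like workload such as ephemeral try.redis.io sessions, where losing the oldest keys is harmless.)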
Btw, if you see this as being not very professional, well, it is :-D The website was set up by a friend of mine in an afternoon, back when we upgraded the site to use SSL. I have access and do sysadmin tasks without knowing very well how it is configured (it uses systemd, for instance, while I usually set things up differently on my machines). The instance was configured without even a proper configuration file, just a few parameters on the command line. The thing is: we handle this as a "community" thing. And after all, considering the near-zero effort and the $5 instance running for many, many years, it kinda works well.
Anyway, all the content is public and open source (see the redis-io and redis-doc repositories), and all our releases are tagged on GitHub, so the site is not vital if it goes down for a short time.
We don't have anything to hide, and leaving the error visible leads to faster resolution. At least two people other than me have access to the VM, so if I'm asleep and they see a problem, they can react. This way is simpler.
Thanks for the argument-less downvote, but it is still a legitimate question. I did not see the stack trace. But then again, since the site seems relatively static/read-only from an outsider's perspective, the environment variables might not contain any secrets or passwords.
This is a bit awkward, but it's a good reminder that 'embarrassing' showstopper bugs aren't necessarily as bad as you think.
As a result of this, here we all are, talking and thinking about Redis. I know I wouldn't be otherwise. It's funny, but stuff like this can turn out to be a net positive once you're over that feeling of 'argh, production went down!' :-)
Italian servers are facing major issues because a farm got contaminated, and now all of our bits are waving their hands and cursing God (the Italian way).