Yes, all instances in eu-central-1c are down. A lot of services are responding with 503, and the console has problems as well (errors loading services, API calls returning 503). Started at 7:29 UTC.
We also saw connection timeouts to DynamoDB and Kinesis from EC2 instances.
Update from AWS:
12:08 AM PST We are investigating increased network connectivity errors to instances in a single Availability Zone in the EU-CENTRAL-1 Region.
12:28 AM PST We are experiencing elevated API error rates and network connectivity errors in a single Availability Zone. We have identified the root cause and are working to resolve the issue.
Because the poster thought it might be interesting to those who frequent the site? Considering its position on the front page at the time of writing, I would say he thought correctly.
How long have you been here? Random Wikipedia articles make their way to the home page from time to time. Every day, ~5 Wikipedia articles are submitted to Hacker News (https://news.ycombinator.com/from?site=wikipedia.org), and some have generated a lot of interesting discussion in the past.
One of the reasons I visit HN so much is precisely these "off topic" articles. Very often they're interesting, thought-provoking, or just plain odd/unusual. That's a great thing! I have a wide range of interests, so anything that tickles my curiosity is cool. Often, I find the comments far more illuminating and interesting than the original article itself.
Things that become popular on HN are pretty much legitimate & on-topic by definition. The guidelines say "anything that gratifies one's intellectual curiosity" is on-topic.[0] Off-topic are "politics, or crime, or sports", but even those can have exceptions.
Posting a "why is this here?" comment is a sure fire way to be downvoted and [dead], after which practically no one will see your complaint anyway. Don't like a story? Flag it. It's the only effective way of lodging your opinion on these things.
This may be completely unfair, but I think it's a drawback of the business model they have. When you have an "open core" that you don't charge for, you have to have something that you can charge for. If you make the "enterprise edition" performant with good UX and the "community edition" slow and clunky, then you threaten to kill off your potential user base. On the other hand, if you spend all your time improving the core, then there is nothing for people to buy. The prudent thing is to do as little as possible on the "core", because it is essentially a marketing expense, and to invest as much as possible in the extensions that bring in revenue.
Personally, I'm not a big fan of "open core" systems for this reason. I'd really prefer that companies like GitLab concentrated on actual services rather than trying to sell software. Having an "open core" can in some ways poison the core for outside development, because you usually have to allow your code into the enterprise versions (or maintain your own forked copy). This is one of the reasons why Ghostscript never got the outside help it really deserved (let's face it -- who uses a free software system and doesn't use Ghostscript?). The fact that nobody pays for it -- or even contributes -- was at one point a pretty sore issue for the author.
I love the fact that GitLab contributes useful free software to the world. I am disappointed that their business plan relies on selling proprietary software. I honestly believe they would be in a better place if they took a different approach, but they have always very politely disagreed with me when I've mentioned it ;-).
We tried charging for services: donations, paid feature development, and paid support. None of them scaled, so we moved to open core, which allowed us to spend much more time on performance, security, installation, and dependency upgrades.
We want to make sure that the open source version of GitLab is just as performant as the enterprise version and has an equally good UX. There is no difference in the UX, and there are no proprietary performance optimizations in the enterprise version.
There are some things that we see as a feature but that you could see as a performance item. An example is the database-backed SSH key lookup, which used to be enterprise-only and landed in the open source version in this release.
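Roughly, the idea is that sshd asks the database for the key instead of scanning a huge flat authorized_keys file on every connection. A sketch of what that looks like in sshd_config (the exact command path depends on the installation):

    Match User git
      # Ask GitLab whether the offered key (%k) for this user (%u) exists,
      # instead of reading ~/.ssh/authorized_keys linearly.
      AuthorizedKeysCommand /opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell-authorized-keys-check git %u %k
      AuthorizedKeysCommandUser git
    Match all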
I was a fan, and we got an Enterprise license when it was the only tier offered.
Now there are two tiers above EES with insane price jumps.
I can only assume there will be even more tiers introduced so we decided not to upgrade.
We use GitLab as a platform for the whole company, but only a handful of people will use EEP/EEU features. An upgrade would be prohibitively expensive unless we reduced the number of licenses to a fraction of the employees.
Hey, you guys are running a business and I'm running a commentary :-) This stuff is hard enough that what-ifs and naysayers are going to crop up. One of these days I should just put my money where my mouth is. I think the scaling issue is definitely a huge problem and even Cygnus said they found it extremely difficult. I'm not sure it's possible to get the ROI you need if you accept VC, so given your current position, it's not really fair for me to criticise.
Hey there! Jacob, Frontend Lead here. At GitLab we've put together a team of 5 Frontend Engineers (including myself) to focus solely on performance and stability issues. We are working on reducing the size of our JS and CSS, and are currently splitting our one giant JS file into many smaller files. Here's our current issue for code splitting the JS: https://gitlab.com/gitlab-org/gitlab-ce/issues/41341
What real-world speedup will you see? There's no data in that issue to support the idea that this is worth doing. Surely someone hacked up a split version and ran some benchmarks, right? Maybe you could include that data somewhere, as the closest thing to it I can find is:
> Benefits: We will have files separated. It’s going to be better.
I'll add our benchmarks into that issue shortly.
Our biggest slowdowns are parsing, layouts, and memory. Without this splitting, every file is imported and functions are instantiated but never used. With our new method, only the files a page needs are bundled, and most of the dispatcher.js file is removed. This replaces error-prone string matching in switch statements with routing done automatically by webpack.
The plan is:
1. Split up the files.
2. Get rid of as much of dispatcher.js as we can and have webpack do the routing dynamically. That would eliminate most of one large, confusing file (dispatcher.js). JS is still cached for the pages you visit, but instead of 1.5 MB of JS it would be ~20 KB per page.
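To make step 2 concrete, here's a sketch of the pattern (the ./pages directory layout and the data-page attribute are illustrative, not our exact structure):

    // Instead of one big switch over page names in dispatcher.js, let
    // webpack's dynamic import() emit one chunk per page and download
    // only the chunk that matches the current route.
    const page = document.body.dataset.page; // e.g. "projects:issues:show"

    import(`./pages/${page.split(':').join('/')}/index.js`)
      .then((module) => module.default())
      .catch(() => {
        // No page-specific bundle exists; the shared code has already run.
      });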
More generally, tuning the runner's polling interval to minimize latency is also tricky, especially with multiple independent runners, and the runner doesn't handle 429s well in my experience (getting stuck in a tight retry loop without backing off sensibly, thus continuing to exceed its request limit).
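For illustration, the behaviour I'd expect on a 429 is plain exponential backoff with jitter; a generic sketch of the pattern (not the runner's actual code, which is Go):

    // Poll a URL, backing off exponentially on HTTP 429 and honouring
    // Retry-After when the server sends one. Node 18+ for global fetch.
    async function pollWithBackoff(url, baseMs = 1000, maxMs = 60000) {
      let delayMs = baseMs;
      for (;;) {
        const res = await fetch(url);
        if (res.status !== 429) return res;
        const retryAfter = Number(res.headers.get('retry-after'));
        delayMs = retryAfter > 0 ? retryAfter * 1000 : Math.min(delayMs * 2, maxMs);
        // Jitter keeps multiple independent runners from retrying in lockstep.
        await new Promise((resolve) => setTimeout(resolve, delayMs + Math.random() * 1000));
      }
    }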
Yeah, and they've promised to work on performance for ages now with almost no improvement.
I run a very small installation with only a couple of projects and a few hundred issues on a 4 GB machine. It eats up 2 GB (sigh!!!) - and often still feels extremely slow. I mean 2 gigabytes!! What for? That's several times the size of all the data I have in the DB there. And then it's not even used for something useful like caching. Some pages take several seconds to load. As a developer, that's totally unacceptable to me.
Is ruby really such a mess that it's impossible to run an app with reasonable memory consumption?
> Is ruby really such a mess that it's impossible to run an app with reasonable memory consumption?
Yes.
Any non-trivial Ruby app will quickly eat up 500 MB, and any non-trivial Rails app will soon balloon to 1 GB, with things getting worse over time due to memory fragmentation†. Since there is no parallelism, your only options are to run more Unicorn workers, where prefork and copy-on-write do little to save you from duplicating memory (especially over time), or to use Puma threads on JRuby, which is a memory hog of its own and often slower than MRI.
There have been arguments that developer time trumps CPU time [0], but there are workloads, problem domains, and uncontrollable events for which this holds at the beginning, yet later on you find yourself painted into a corner: suddenly things are not sustainable because you just can't throw more hardware at the issue without going belly up [1]. Once the low-hanging fruit has been picked, you're challenged just to make your app behave within established parameters, with diminishing returns on effort you'd surely rather spend solving actual problems for your customers. At that point you might just as well spend the money on rewriting part or all of your app in a more frugal ecosystem and mindset [2].
† Switching to jemalloc may or may not help. Over here it did not.
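If you want to try it yourself, it's just a matter of preloading the library before starting the app; the library path here is distro-specific:

    LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so bundle exec rails server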
Being a PHP developer myself, I find it really hard to believe that resource consumption is treated with so little priority in the Rails/Ruby world.
And some attitudes here, like "who cares? memory/CPU is cheap nowadays", are in my opinion part of the problem. I'd say well-written software should use as few resources as possible. Probably a habit that comes from my early days on a C64 back in the 80s.
Replace "Ruby" with any modern interpreted language and then get over it. With the price of memory so insanely cheap, why does it matter? Gitlab isn't trying to be a lean Go microservice-powered app. It's attempting to be the centre of your source control and deployment world. So 2gb usage seems reasonable to me.
Any cloud provider will provide a VM that can comfortably run it for a very reasonable price.
It’s not just 2 GB – I’m running a GitLab instance with essentially 2-3 users and activity every few days, and it completely maxes an entire core and constantly uses 6 GB of RAM.
I'm using their official helm chart and it's just idling. Sidekiq is doing nothing and the server is serving no requests - I have no idea how it manages to spend so much time on nothing.
The helm chart "gitlab" is deprecated, the helm chart "gitlab-omnibus" is currently recommended, and the helm chart "cloud native gitlab" is going to be recommended in the near future - a GitLab employee answered me a few hours ago in this thread :)
Hi, sorry you haven't noticed any improvements. We've been chipping away at performance for the last 6 months and have made some pretty noticeable improvements in various areas of the application.
For example, the average response time of an issue page has come down from 2.5s to 750ms over the last 6 months.
We still have a lot to do, but we're getting there.
One big thing that we would love to do is to move to a multithreaded application server (https://gitlab.com/gitlab-org/gitlab-ce/issues/3592). That would save a ton of memory, but the last time we tried we got burned because the ecosystem wasn't ready.
Maybe you are already committed to GitLab and its workflow, but if you need something lightweight, Gogs and Gitea are small and fast GitHub clones written in Go.
Personally I run GitBucket, a small and self-contained Java-based "GitHub alternative".
There are a bunch of these small project/git hosts, and while they're easy to manage, they're less featureful than the GitLab offering. GitLab does have some great features, such as the whole integrated CI system built upon runners & Docker.
The downside is complexity and resource usage. I know GitLab is free, and I could install it, but the added resources and potential security issues make it a non-starter.
Apart from that, you can take a look at any of the past release posts or merge requests tagged with "performance" [1][2] and you'll see that plenty of improvements have been made over time.
> Is ruby really such a mess that it's impossible to run an app with reasonable memory consumption?
Ruby is not really to blame for this; instead, it's mostly Rails and all the third-party libraries that we add on top that consume so much memory.
I can only speak from my own experience. I regularly read the release notes, especially the sections on performance. And with each upgrade I'm hoping so much that we get back to loading times below one second (as it was with early GitLab releases).
Unfortunately, this is almost never the case: sometimes pages load even slower, and in the best case there's not much difference. The same goes for memory consumption.
But I understand now that this will always be a problem with rails.
Do you have any examples of what kinds of pages are loading slowly? Are these issue detail pages, MR diff pages, etc.? It's possible you're running into cases we're not experiencing (enough) on GitLab.com, or maybe we did but we haven't really looked into them yet.
They are improving performance and UI, though; it's only the new features that go to the EE* versions, since new features are their way of getting new Enterprise subscribers.
Absolutely this. The amount of RAM I need to run GitLab is greater than the combined RAM requirements of all the projects I would host in a GitLab instance.
They've been promising to fix performance and the UI for years now, so I wouldn't hold out much hope. It's a shame, but there are better open source products, so it's not like there are no other options for self-hosting.
For what it's worth, in the last year (since January 23, 2017) we've merged ~440 merge requests labeled "performance"[0]. It's not perfect right now and there's still plenty of work to do, but compared to when I started at GitLab almost two years ago it's night-and-day.
We've also got an entire team dedicated to porting our Git layer to Go with Gitaly[1], which has been a major bottleneck that we've started resolving over the last year or so.
The git 2.9 install from Homebrew prompted me for keychain access this morning and then just "worked". It was my first time using brew's git, so you may want to give it a whirl.
I'm American. This makes sense: if anyone was going to order what seems like a range extender for a device that just brings you stuff you were too lazy to type, it would be Americans.
It's the Google Glass problem: the interface is me yelling publicly. So I'm not super sure this is going to be adopted well.
I use them in my home. Being able to ask it to set a cooking timer while my hands are full is pretty awesome.
Echo is one of those things that became magically awesome by being somewhat more accurate than I expected. Also, Amazon is updating the service back ends, and it is now extensible.
Investing directly in the market (via an ETF) is always a good idea. Look at this chart: the percentage change of the S&P 500 SPY ETF over the last 3 years is a 60% increase.
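For scale, taking that 60% figure at face value: 1.60^(1/3) ≈ 1.17, i.e. roughly 17% per year compounded.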
It has been known for a while that breastfeeding is associated with higher IQ. There was a study in 2007 showing that the correlation between breastfeeding and IQ is moderated by a genetic variant in FADS2.