After a U.S. federal contractor told us they loved Plane but couldn't use it due to ITAR requirements, we spent 6 months building a truly air-gapped version. No external connections, no license pings, no telemetry, everything runs in complete isolation.
The interesting part: our air-gapped deployment actually runs faster than our SaaS version. Turns out when you eliminate all network latency, things get snappy.
This post covers the technical challenges we solved (supply chain trust, 2GB bundle size, offline licensing) and why regulated industries need alternatives to cloud-only tools like Jira.
I don’t get it; dependencies are either needed or not. If needed, they are either pulled from a project or written in-house. So how are dependencies evil? Is the rage against feature bloat pulling in dependencies? Then the issue is the bloat.
Functionality is either needed or it isn't, but it doesn't need to come from an external dependency. When it does, it probably comes with functionality you didn't need too. And as soon as you have a compile/runtime dependency on external code, your compile/execution environment needs to always have access to third party code. So that's bloat and complexity. You also give up control. Hopefully it ends up saving a bunch of time over developing it internally.
Hopefully an upgrade to an external library doesn't end up including another dependency that happens to include some backdoor that steals all the credit card information in your database. Or a crypto miner in frontend code. Or introduces a bug that stops people from being able to check out. Or the money package starts calculating slightly differently than your payments provider... Etc. etc.
Instead of CI/CD pipelines and a million dependencies, why don't we just put all of the containers, like, on one single VM? And just make it a Linux box or whatever?
The usual way to deploy such things is actually to create one VM for that application, install podman, and then run all those containers in that VM. You cannot trust software vendors not to do stupid shit like requiring the docker socket, mounting overly broad volumes from the host's filesystem, or failing to provide working, non-stupid compose/helm/...-files. Often the support contract also requires a specific version of a specific OS, a specific Kubernetes distro, or something like puppet/chef/... for deployment. Since we couldn't easily satisfy all of those requirements from a multitude of software vendors on the same Kubernetes cluster or shared infra, we just split it up into VMs.
- it is not at all surprising that when you remove cruft, code performs better
- it is not at all surprising that it is not common enough amongst software engineers to even consider these things (competing business interests probably cause this, often)
Not being connected to the work VPN already slows my Windows machine to a near halt, since a few unreachable network drives are all it takes to make Explorer go unresponsive.
Seems like engineers forget to test these things nowadays.
Totally agree. Why pick the one negative thing to say instead of saying “this should be done more often”, for example? Just aggravating, as a behaviour.
Once again going full-circle with the industry reinventing self-hosted software. Excuse my cynicism, I'm going back to minding my own business (reinventing design systems / component libraries, lol)
> Turns out when you eliminate all network latency, things get snappy.
Same experience with JIRA. I read all these negative comments here and elsewhere about how slow and clunky JIRA was, and I couldn't relate at all.
Then I realized all those who complained were using JIRA Cloud and we were using on-prem, and it all made sense.
We've since moved to JIRA Cloud ourselves, and I understand now.
We moved, and none of the new places had any viable computer room, so we literally had to put the rack in a closet. And well, that ain't cutting it for physical access control these days. Thankfully we have very simple flows without any BS, so not too many 1-5 second clicks to get things done.
Just open the network tab and refresh a page in Jira and you will understand. It isn’t too noticeable on a LAN. Stick the internet in there and it is painful. The worst I have seen is self hosted and accessed over Netskope ZTNA. Truly an abomination.
It's not the refresh that screws you. It's the four goddamn dozen asynchronous calls it has to make after that refresh has completed to actually fill out the content of the page and let you click through stuff.
I have to load half a dozen tabs of new tickets and then cycle through them, triaging and defining fields in a collated manner, so that my time isn't hugely dominated by waiting.
We used to have on-prem and it was probably about an order of magnitude better, but still nowhere near "XP in a VM accessing a site on localhost" level snappy.
> We've since moved to JIRA Cloud ourselves, and I understand now.
Jira on-prem was dog slow, yes, especially if it didn't live on the same server as the database. But Jira Cloud? It isn't much faster than that! It's a hot mess. Loading placeholders everywhere. Really, I have absolutely zero idea what Atlassian is doing, but I know for sure that optimizing for performance is not amongst the things they are doing.
Yeah, this whole pattern of loading a million placeholders and then watching the page awkwardly snap into layout is just sad. Especially when you know that you could have shown just as much information in a "server side rendered" piece of PHP in 2005 with less latency.
I have had the opposite experience with Jira at a relatively large corporation (years ago). Our local Jira was probably just configured weird or on underpowered hardware though.
Having adopted a number of development tools, including Jira and Confluence, I find it amazing that people let them sit there chugging away on underpowered machines while hundreds of users quietly complain about the speed. Throwing some extra CPU cores and memory at them is so cheap for the quality-of-life improvement, let alone the productivity gain.
The concurrent (human) user count at even large companies is probably a couple dozen at most.
Usually with these tools, the performance problems magically vanish if you disable all the integrations people have set up. My company is constantly denial of service attacking Jira with Github updates, for example.
I delivered a complex, highly customized enterprise back-office system for a large Fortune 500 some time back. It involved a handful of servers (all as VMs), x3 to accommodate DEV/QA/PROD staging.
It worked great in volume testing in our environment. Their IT department installed it on high end servers (hundreds of cores, incredibly expensive storage subsystems, etc) but users complained of latency, random slowness, etc. IT spent weeks investigating and swore up and down it wasn't their end and must be a software issue. We replicated and completely sanitized production volumes of data to try and recreate locally and couldn't.
Finally I flew down and hosted their entire infrastructure off my laptop for a day (I'll skip all the security safeguards, contract assurances, secure wipes, etc). It flew like a thoroughbred at a racetrack. No latency, instant responsiveness, no timeouts, no hiccups. Their entire staff raved about the difference. The results gave the business unit VP what she needed to bypass the usual, convoluted channels, and someone must have lit a fire under their IT VP - by the end of that day their internal techs identified a misconfiguration on their storage arrays and solved the problem. I can only guess how many other apps were silently suffering for weeks or months on the same array. I joked I'd be happy to sell them a laptop or two for a fraction of their mainframe cost.
I spent a few years running all of the self-hosted development and project management tooling for a government project about a decade back, and the point about integrations rings true to that experience. The CI system that had been put in place was probably the most sophisticated I've ever seen, but it had some unfortunate side effects, like Jenkins jobs being kicked off automatically thousands of times an hour, blasting all of the Atlassian tools with network requests, or Nessus remotely logging into the servers actually hosting the Atlassian tools and spawning 40,000 simultaneous processes on them.
People complaining about JIRA has become enough of a trope that it mostly gets ignored.
Also big enough corps give underpowered machines to the mass of employees (anyone not a dev, designer or lead of something) so latency is just life to them.
I contracted with a company that used an on-prem version of Jira. Their instance was painfully slow over VPN, and I often wondered why they didn't add more resources, thinking that this was the experience for everyone. Then I went on-site for a few days and the performance was night and day. On-prem, their Jira instance was the fastest I've ever seen, faster than the cloud version, and it felt snappier than Trello.
On-prem is great if everyone is coming into the office, but I think orgs should pay more attention to the "remote experience" of their on-prem tools.
My company self hosts most things, which is bad for remote employees and employees in offices other than the primary because the VPN server (or possibly their network connection) is underpowered for the number of remote users. I sometimes need to wait 45 minutes for a like 1GB clone.
Myself included, people use JIRA far too much out of the box, and few ever learn what it can actually do.
Out of the box it is pretty generic. When I learned what it could actually do, it revealed itself as a sponge that can uniquely absorb complexity. Having someone familiar with JIRA show me the ropes went a long way.
Some of these new development tools are pretty nice though. Variety is good, especially with Pivotal Tracker and the like going away.
Our org used Jira on-prem for 2k engineers and 3k additional staff and it was slow as molasses.
The dialogues and context menus took forever to show and page navigation was beyond painful.
We had dedicated engineering for maintaining our Jira and Bitbucket, and they still fell over. We eventually moved back to GitHub. (Our usage went from GitHub on-prem pre-MS -> Bitbucket on-prem -> GitHub cloud post-MS.)
I hate Jira regardless of where it's deployed. It's a beast.
We run an airgapped version of JIRA (but we are a very big company, globally distributed). Performance is in the gutter.
The annoying part is the amount of garbage fixes in JIRA's UI. For example, because of the loading speed, and me losing patience with it, if I don't wait for the page to finally finish loading before clicking the "create" button, then instead of the modal dialog for issue creation I get a whole new page for issue creation. Both options are atrocious from a UX perspective (because usually I need to copy text from the issue I was reading into the issue I'm creating), but at least when it's the modal window, I can pop open the developer tools and delete the overlay that blocks me from copying text out of the issue underneath.
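Something along these lines in the browser console does the trick. This is only a rough sketch; the selectors are guesses and differ between Jira versions and skins, so inspect the actual overlay element first:

```typescript
// Console sketch: remove the dimmed backdrop so the issue underneath becomes
// selectable again, while leaving the create-issue dialog itself open.
// Selector names here are guesses, not guaranteed Jira class names.
document
  .querySelectorAll(".aui-blanket, .modal-backdrop, [data-testid*='blanket']")
  .forEach((el) => el.remove());
```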
Also, it looks like due to speed, some queries simply don't finish on time, and randomly, searches don't find all the issues they should. Especially searches that ask for "s.t. parent issue has such-and-such properties".
Ultimately, JIRA isn't built to scale (ironically, since it's written in Java, which was always defended as being slow for small problems but scaling well). The code has a lot of assumptions about some operations being fast enough not to require buffering or an incremental implementation. And sometimes you hit a combo of such unoptimized operations and have to wait minutes for the program to respond.
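As a purely hypothetical illustration of that kind of assumption (this is not Jira's code, just the general shape of the pattern), the difference is roughly between pulling everything into memory and filtering it there versus working through bounded pages:

```typescript
// Hypothetical sketch, not Jira code: "assume it's small" vs. an incremental,
// paged implementation. All names here are made up for illustration.

interface Issue { id: string; parentId?: string }

// Stand-in for a real data source; here just an in-memory array.
const ISSUES: Issue[] = Array.from({ length: 50_000 }, (_, i) => ({
  id: `ISSUE-${i}`,
  parentId: i % 10 === 0 ? undefined : `ISSUE-${i - (i % 10)}`,
}));

async function fetchIssuesPage(offset: number, limit: number): Promise<Issue[]> {
  return ISSUES.slice(offset, offset + limit);
}

// Anti-pattern: pull everything, then filter. Fine on a small instance,
// falls over once the row count and per-row cost grow.
async function searchAll(matches: (i: Issue) => boolean): Promise<Issue[]> {
  const all = await fetchIssuesPage(0, Number.MAX_SAFE_INTEGER);
  return all.filter(matches);
}

// Incremental: bounded pages, so memory and per-step latency stay bounded.
// Usage: for await (const issue of searchPaged(i => i.parentId === "ISSUE-0")) { ... }
async function* searchPaged(matches: (i: Issue) => boolean, pageSize = 500) {
  for (let offset = 0; ; offset += pageSize) {
    const page = await fetchIssuesPage(offset, pageSize);
    yield* page.filter(matches);
    if (page.length < pageSize) return;
  }
}
```

The first version is exactly the sort of thing that only shows up as minutes-long waits once the instance is big enough.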
The other thing: every PM wants a custom field just for their project, a field they'll forget they asked for a day later. TL;DR, put a governance board in place that's fine with saying no, especially when someone inevitably pulls rank.
Apparently System-scope custom fields have a significant performance hit in Jira. I think project-scope custom fields are better.
Sometimes it feels like Jira is so incredibly configurable but is really missing the "pit of success". There is a way to make it nice to use and reasonably performant, but you really need to go into it with a strong plan. And even then it's really easy to balls it all up in short order if you're not vigilant.
They've removed it from their pricing page now, but when they announced the discontinuation of the regular on-prem Server product, the minimum for Data Center was like 500 licensed users or something along those lines.
In any case it was clear it's not for small shops like us.
That said, air-gapped is a hefty requirement, so perhaps those customers are predominantly large?
I still run an old version on an air gapped network and will continue to do so until we're forced to change for some reason. It's not a hefty requirement; we run it for a team of < 10 developers on a small VM and it just works.
> That said, air-gapped is a hefty requirement, so perhaps those customers are predominantly large?
There are lots of very small classified networks out there with only a few dozen users.
There are a lot more user communities, of course, that aren't necessarily airgapped, but that have special compliance requirements which pretty much mandate self-hosting (or at least bring-your-own cloud).
We took a different approach with Plane's air-gapped offering. No minimum user requirements at all. We evaluate based on your use case and domain requirements, not team size.
We do something similar with our B2B product (in an entirely different niche). We have everything from single-person companies up to very large ones. Similarly, we set price based on use case and requirements.
There has historically been massive investor and shareholder pressure for companies to show "Cloud Recurring Revenue"; multiple Wall Street analysts will start issuing higher price targets for your stock based on this, and eventually large institutional investors adjust their positions accordingly.
I like the cloud for a lot of reasons. But, making your software worse to make your stock price higher seems like a loser for everyone long term.
It might as well be for the vast majority of companies, since I believe the smallest number of users you can buy support for is 500.
To be more specific, they killed off the legacy Jira Server and now only offer these enterprise versions of Jira and the rest of the suite if you won't move to the cloud.
How do you handle compliance in confirming that the product is only used for the license duration? (Or is it more of a one time purchase plus recurring fee for updates?)
At this level (govt, 6-figure+ deals) I would at least consider whether this problem should have a non-tech solution, and instead have a legal/lawyer solution. In my experience (not US-based, though), govt contracts are under compliance programmes as well, so the govt agency's legal/contract-management team would probably follow up internally on expiring contracts (i.e. licences) and require the owning stakeholder to either renew the contract or abandon the software. Meaning the customer would supervise itself regarding the licence. But even if you don't want to rely on self-supervision, having your lawyer spend one hour reaching out with a "do you need to renew your licence?" at the end of a licence term would probably be much cheaper than building and maintaining an air-gapped licence solution.
Usually you do have recourse via procurement channels and reps. If you file a complaint with that agency stating that they’re using a license without paying for it, it will result in at least an investigation.
If you got to hire the cops to investigate your own mistakes, would you hire competent, motivated folks who'd leave no stone unturned and get access to every classified, air-gapped network in search of license infringements?
I wouldn't. I'd hire some Peter Gibbons type, who only does about 15 minutes of real, actual work in a typical week. Then I'd tell them they can finish early if all their pending cases are closed.
If enterprise corporations actually did a thorough investigation, they would probably find that a lot of their license deals have gone unfulfilled. They are really bad about this kind of stuff. It became super complicated to buy this kind of software once companies realized that they could force everything through a deal desk and try to extract as much money out of the government as possible.
We have had companies outright refuse to even give us a price when we told them we wanted to investigate buying a license. Such a PITA.
The acquisition and procurement departments in many government agencies are often "independent" in that they don't directly report to the agency. They're more like compliance people that make sure you're complying with the procurement laws and regulations.
And unpaid software licenses are a violation.
Now maybe the client in this case had some kind of ownership clause, etc., but in general, procurement people tend to be pretty neutral in my experience.
Then again, I've only dealt with small contracts (< $500k).
Largely agree but I want to challenge this bit at the end.
> probably be much cheaper than building and maintaining an air-gapped licence solution
I think this is an unwise attitude to take. There's something to be said for a simple picket fence. Even though someone could easily hop it if they wanted to, they lose plausible deniability and in most cases that's all that really matters at the end of the day.
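To make the picket fence concrete: even something as small as verifying a vendor-signed license file at startup does the job. This is a purely illustrative sketch, assuming a JSON license signed with an Ed25519 key shipped alongside the build; it is not Plane's (or anyone's) actual format:

```typescript
// Illustrative offline license check; not any vendor's real implementation.
// Assumed file shape: { "payload": "<JSON string>", "signature": "<base64 Ed25519 sig>" }.
// Signing the raw payload string sidesteps JSON canonicalization issues.
import { readFileSync } from "node:fs";
import { verify } from "node:crypto";

interface LicensePayload {
  customer: string;
  seatLimit: number;
  expiresAt: string; // ISO 8601
}

// Hypothetical public key shipped with the build.
const VENDOR_PUBLIC_KEY = readFileSync("vendor-license-pubkey.pem", "utf8");

export function loadLicense(path: string): LicensePayload {
  const file = JSON.parse(readFileSync(path, "utf8"));
  const payloadBytes = Buffer.from(file.payload, "utf8");
  const signature = Buffer.from(file.signature, "base64");

  // Ed25519 uses no separate digest algorithm, hence the `null`.
  if (!verify(null, payloadBytes, VENDOR_PUBLIC_KEY, signature)) {
    throw new Error("license signature invalid");
  }

  const payload = JSON.parse(file.payload) as LicensePayload;
  if (new Date(payload.expiresAt).getTime() < Date.now()) {
    throw new Error("license expired");
  }
  return payload; // caller enforces payload.seatLimit at seat-assignment time
}
```

Anyone determined can patch around a check like this, but doing so is a deliberate act, which is exactly the plausible deniability you're talking about removing.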
It's a subscription license. We offer air-gapped deployments under the Business plan. As part of compliance, we ask customers to share license logs quarterly; no PII is involved. Also, the license enforces seat limits, so you can't exceed the number of users you've purchased. https://plane.so/pricing