Right up there with New Relic writing a blog post about how you couldn't trust Sumo Logic (and should move to NR) after they got bought by Francisco Partners, only to be bought by Francisco Partners themselves.
It was available for 6 years (2011-2017); that's hardly a failure relative to other Apple products that have only made it through 2 cycles (most recently the iPhone Mini sub-family, which lasted 2 years).
It seemed like the initial Direct File rollout was limited to states that either didn't have a state-level income tax or directly cooperated with the IRS. Are they forcing all states to play ball, or will Direct File not cover state tax submissions?
"will be made permanent for the 2025 tax season with all 50 states and Washington D.C. invited to participate"
It seems very much that this is an opt-in thing, state by state. I would suspect that the result of this will be most Democratic-leaning states at least trying to opt in for next year, and many Republican-leaning states refusing loudly during this election year, then quietly starting to opt in in a few years.
States part of the pilot program (i.e., 2023 tax year): Arizona, California, Florida, Massachusetts, Nevada, New Hampshire, New York, South Dakota, Tennessee, Texas, Washington (state), Wyoming as per https://www.irs.gov/about-irs/strategic-plan/irs-direct-file....
Looks like exactly half and half, Democratic/Republican to me ...
If you look at the tax situation across the states, the party split makes a bit more sense. Of the states involved in the pilot program, Florida, Nevada, South Dakota, Tennessee, Texas, and (to some extent) Washington do not have individual income tax. Also, Arizona and New Hampshire both have a flat rate income tax, which I readily admit knowing nothing about, but which I presume simplifies them being part of the pilot. So California, Massachusetts, and New York are the only states which have a variable tax rate and also opted in, and are also all about as Democrat leaning as it gets.
> The Direct File pilot doesn't prepare state returns. However, if you live in Arizona, California, Massachusetts or New York, the Direct File pilot guides you to a state-supported tool you can use to prepare and file your state tax return.
> If you live in Washington state, Direct File guides you to a state site where you can apply for the Working Families Tax Credit when you file your federal return with the Direct File pilot.
Essentially. Take the Visa credit card lines for example -- Visa Infinite cards have a higher transaction fee than a Visa Signature card, and the high-end travel cards will be of the Infinite variety (Chase Sapphire Reserve).
It works (and is the best option bar-none) until the big few lobby the state to outright ban municipal ISPs as happened in my state (NC). So frustrating.
It baffles me that people are in support of that kind of thing. I doubt the majority of voters are, but the fact that anybody would say "yeah, we should have our utilities privately managed for profit" is absurd.
> The question that would be asked is "do you want the government subsidising and distorting the internet market"
The other question that would be asked, perhaps less explicitly, is "do you want to drive [employer of X,000 jobs in the region] out of the state?". It doesn't matter if those jobs would be replaced by superior jobs at a publicly-operated telco, or if some of those jobs are complete bloat (and therefore paid for by inflated utility bills among all subscribers). Nobody wants to engage in that argument; it's a losing battle.
Yes, there's an easy answer to that, but when faced with the option of a short-term sacrifice in exchange for a long-term collective gain, people invariably opt not to take it.
While simultaneously saying broadband is available in your area because one house in the entire census tract has it. We challenged the agreement with the town and they told us that we technically had fiber service, but it would cost 50k to activate it since they had to run fiber from their pop. Coupled with informing us that once we did so, our neighbors could then hook up for 49 bucks.
> We challenged the agreement with the town and they told us that we technically had fiber service, but it would cost 50k to activate it since they had to run fiber from their pop. Coupled with informing us that once we did so, our neighbors could then hook up for 49 bucks.
If only there were a way to spread that cost equally among the people who would benefit from it, and a legal structure for collecting that payment, representing the interests of the constituent people, and ensuring that the telco held up its end of the arrangement!
My current company is split... maybe 75/25 (at this point) between Kubernetes and a bespoke, Ansible-driven deployment system that manually runs Docker containers on nodes in an AWS ASG and will take care of deregistering/reregistering the nodes with the ALB while the containers on a given node are getting futzed with. The Ansible method works remarkably well for its age, but the big thing I use to convince teams to move to Kubernetes is that we can take your peak deploy times from, say, a couple hours down to a few minutes, and you can autoscale far faster and more efficiently than you can with CPU-based scaling on an ASG.
From service teams that have done the migrations, though, the things I hear consistently are:
- when a Helm deploy fails, finding the reason why is a PITA (we run with --atomic so it'll roll back on a failed deploy. What failed? Was it bad code causing a pod to crash loop? A failed k8s resource create? Who knows! Have fun finding out!)
- they have to learn a whole new way of operating, particularly around in-the-moment scaling. A team today can go into the AWS Console at 4am during an incident and change the ASG scaling targets, but to do that with a service running in Kubernetes means making sure they have kubectl (and its deps, for us that's aws-cli) installed and configured, AND remembering the `kubectl scale deployment NAME --replicas=N` syntax (rough sketch below).
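For what it's worth, the 4am fix ends up looking something like this (a rough sketch; the cluster, region, namespace, and deployment names are made up):

```
# Rough sketch of the 4am manual scale-up, assuming an EKS cluster;
# cluster, region, namespace, and deployment names are hypothetical.

# One-time setup on the responder's machine: kubectl plus aws-cli for auth.
aws eks update-kubeconfig --region us-east-1 --name prod-cluster

# The command the team has to remember mid-incident
# (vs. dragging a number in the ASG console).
kubectl scale deployment/checkout-api --replicas=20 -n checkout

# Confirm the new replicas actually came up.
kubectl get deployment/checkout-api -n checkout
```

It's not hard, but it's two tools and a syntax nobody remembers under pressure.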
The problem with bespoke, homegrown, and DIY isn't that the solutions are bad. Often, they are quite good—excellent, even, within their particular contexts and constraints. And because they're tailored and limited to your context, they can even be quite a bit simpler.
The problem is that they're custom and homegrown. Your organization alone invests in them, trains new staff in them, is responsible for debugging and fixing when they break, has to re-invest when they no longer do all the things you want. DIY frameworks ultimately end up as byzantine and labyrinthine as Kubernetes itself. The virtue of industry platforms like Kubernetes is, however complex and only half-baked they start, over time the entire industry trains on them, invests in them, refines and improves them. They benefit from a long-term economic virtuous cycle that DIY rarely if ever can. Even the longest, strongest, best-funded holdouts for bespoke languages, OSs, and frameworks—aerospace, finance, miltech—have largely come 'round to COTS first and foremost.
Personally, I don't like Helm. I think for the vast majority of use cases where all you need is some simple templating/substitution, it just introduces way more complexity and abstraction than it is worth.
I've been really happy with just using `envsubst` and environment variables to generate a manifest at deploy time. It's easy with most CI systems to "archive" the manifest, and it can then be easily read by a human or downloaded/applied manually for debugging. Deploys are also just `cat k8s/${ENV}/deploy.yaml | envsubst > output.yaml && kubectl apply -f output.yaml`
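Spelled out a bit more as a CI step (just a sketch; the paths and variable names here are illustrative, not from my actual setup):

```
# Sketch of the envsubst-based deploy step; paths and variable names
# are illustrative. envsubst ships with GNU gettext.
export ENV=prod
export IMAGE_TAG="${CI_COMMIT_SHA:-latest}"

# Render the manifest, keeping a copy as a CI artifact so a human can
# read it or re-apply it by hand while debugging.
envsubst < "k8s/${ENV}/deploy.yaml" > rendered-deploy.yaml

kubectl apply -f rendered-deploy.yaml
```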
I've also experimented with using terraform. It's actually been a good enough experience that I may go fully with terraform on a new project and see how it goes.
You might like Kustomize if you don't care for Helm (IMO, just embrace Helm; you can keep your charts very simple and it's straightforward). Kustomize takes a little getting used to, but it's a nice abstraction and widely used.
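For anyone who hasn't seen it, the basic shape is a base plus per-environment overlays (sketch only; the directory names are invented):

```
# Minimal Kustomize sketch; directory and file names are invented.
# Layout:
#   base/deployment.yaml, base/kustomization.yaml
#   overlays/prod/kustomization.yaml   (patches replicas, image tag, etc.)

# Preview what the prod overlay renders to.
kubectl kustomize overlays/prod

# Apply it (kustomize has been built into kubectl since v1.14).
kubectl apply -k overlays/prod
```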
I cannot recommend terraform. I use it daily, and daily I wish I did not. I think Pulumi is the future. Not as battle tested, but terraform is a mountain of bugs anyway, so it can't possibly be worse.
Just one example of where terraform sucks: you cannot deploy a Kubernetes cluster (say, an EKS/AKS cluster) and then use the kubernetes_manifest resource against it in a single workspace. You must do this across two separate terraform runs.
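The usual workaround, as far as I know, is just two root modules applied in sequence (a sketch; the directory names are made up):

```
# Sketch of the common workaround: two separate root modules, applied
# in sequence, so the cluster exists before the kubernetes provider is
# configured. Directory names are made up.
terraform -chdir=infra/cluster init
terraform -chdir=infra/cluster apply      # creates the EKS/AKS cluster

terraform -chdir=infra/workloads init
terraform -chdir=infra/workloads apply    # kubernetes_manifest resources
```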
I haven't used Kubernetes in a few years, but is there a good UI for operations? Something like your example of the AWS console, where you can just log in and scale something in the UI, but for Kubernetes. We run something similar on AWS right now: during an incident we log into the account with admin access to modify something, then go back and configure that in the CDK post-incident.
AWS has a UI for resources in the cluster but it relies on the IAM role you're using in the console to have configured perms in the cluster, and our AWS SSO setup prevents that from working properly (this isn't usually the case for AWS SSO users, it's a known quirk of our particular auth setup between EKS and IAM -- we'll fix it sometime).
I have to say that when you have more buy-in from delivery teams and adoption of HPAs, your system can become more harmonious overall. Each team can monitor and tweak their services, and many services are usually connected upstream or downstream. When more components can ebb and flow according to the compute context, the system overall ebbs and flows better. #my2cents
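Concretely, the per-team knob is usually just an HPA on their own deployment, something like this (illustrative only; names and targets are placeholders):

```
# Illustrative only; deployment name, namespace, and targets are placeholders.
# Each team can attach and tune an HPA on their own service:
kubectl autoscale deployment/orders-api -n orders \
  --cpu-percent=70 --min=2 --max=12

# ...and watch it ebb and flow with load.
kubectl get hpa -n orders --watch
```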
They might not still have the game source and/or knowledge on how to recompile the game. There are lots of technical reasons that it might be entirely unworkable to do that even if they wanted to.
(disclosure: work in video games at a big studio where I know for certain there are things we couldn't build again at this point without a ton of work)
Has that situation improved at all? I appreciate many shops still cowboy code their way across the finish line, but I would think that big publishers might put more effort into ensuring digital preservation for potential remakes/ports/whatever.
For example, it is not outrageous for me to believe that Microsoft/Sony/Nintendo would require source code/build pipelines/something for any release on their marketplace. Or do they just accept a binary from developers?
That being said, I could easily believe that even if you had the game source code, there could be many additional bespoke toolkits, widgets, 3rd party binary libraries, etc all of which have their own inscrutable compilation process.
I think Microsoft gets away with only getting binaries because they have a handle on the APIs you have to use for their consoles, so they can maintain backwards compatibility with a little translation, some shims, and the go-ahead from the publisher.
Nintendo also has the advantage of other people writing emulators for their hardware that they can take advantage of years later, but only for first-party stuff. (Later PS2 game releases for the PS3 did something similar, with Sony hiring a prominent emulator dev.)
I have to assume it's either licensing issues with toolkits/middleware or apathy that stops the release process from being "ship us a binary and a docker container that can build it".