Well, you're also hardware-locked. The last time I worked on a mainframe (about a decade ago), 8 MB of RAM was $1,000 and a network card was $800. On commodity hardware around the same time, 8 MB was probably $50 and a network card, even a good one, was at most $100.
You're paying a premium for both the hardware and the scarcity of developers who know EBCDIC, JCL, COBOL, etc.
Actually, from what I remember of Oracle, that might be very similar. I remember having to pay license fees per core.
There's a middle ground between microservices and a monolith: de-coupling. The UI can be written in JS/TS with a backend written in a more scalable, faster language. It doesn't need to be all or nothing. Even if you stick to Ruby, pulling some things apart along separate concerns/responsibilities is not a bad approach, i.e., if you have workers/cleanup operations, they don't need to live in the same codebase as your REST/MVC code.
Also, it would be nice if their CI/CD could be defined in multiple .yml files instead of the one giant file I seem to end up with in most projects.
> Also, it would be nice if their CI/CD could be defined in multiple .yml files instead of the one giant file I seem to end up with in most projects.
It can. You can use `include` to include templates and job definitions from other files. Also, with child pipelines (i.e. trigger jobs), you can run a pipeline defined in a separate YAML file.
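For example, something along these lines (the file paths and job names are made up):

```yaml
# .gitlab-ci.yml -- split across multiple files instead of one giant one
include:
  - local: /ci/build.yml   # shared templates and job definitions
  - local: /ci/test.yml

docs:
  stage: deploy
  trigger:
    include:
      - local: /ci/docs-pipeline.yml   # child pipeline in its own YAML file
```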
They have UI helper tools with auto-completion; it's not a complete answer, but it should make life easier. `select(*)` not being supported seems to be an intentional choice, unfortunately.
To be fair, a kernel dev is so far down their rabbit hole that web services and web tech aren't something they're particularly interested in. They also know intricate details about what my kernel does that I have no hope of ever understanding, or desire to. They excel at what they're passionate about.
In the first interaction I had with GraphQL, the developer who introduced it tried to make it TOO smart for its own good. It was building complex SQL dynamically, which meant the SQL it was running was borderline non-deterministic.
(i.e., it identified the foreign keys and primary keys, linked them together, and made all other fields optional)
Things I don't like about GraphQL (in Java, which was my experience):
1. Debugging it is annoying. There's no clear, concise way to follow the code; there seems to be some magic happening, and it isn't clear when certain parts of the code get invoked.
Things I like about it:
1. Lets you make multiple queries and reduce/extend the size of your payload as needed.
In my view GraphQL is much better when you're not constrained by a SQL-like backend. It's great at filtering the payloads, which is great for mobile and such. It also allows you to make multiple calls in one go, which also means you can shoot yourself in the foot if you overdo it.
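As a rough illustration of what that trimming looks like from the client side (the endpoint and schema here are made up):

```python
import requests

# Two root queries in a single round trip, each selecting only the
# fields this client needs; the server never sends the rest.
query = """
{
  devices { id name }              # no config blobs, no audit fields
  alerts(limit: 5) { id severity }
}
"""
resp = requests.post("https://api.example.com/graphql", json={"query": query})
print(resp.json()["data"])
```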
I will call out that some of this is trauma from my last experience. Having a more dynamic language that isn't Java may make the experience better, but in general, every time we had to update the GraphQL code it was cringy.
Eventually we started gutting the dynamic SQL and replacing it with a simple query, then used GraphQL to trim the response, which worked out much better.
Generally, the question to ask is: how many iterations of an endpoint do you need, and is it worth introducing a new technology vs. just having a few query parameters to do some filtering?
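To make the query-parameter option concrete, a sparse-fieldsets style param covers a lot of the same ground (a sketch; the data and field names are invented):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Stand-in data; a real app would query a database.
DEVICES = [
    {"id": 1, "name": "core-sw-1", "vendor": "acme", "uptime": 9999},
    {"id": 2, "name": "edge-rtr-1", "vendor": "acme", "uptime": 1234},
]

@app.get("/devices")
def list_devices():
    # GET /devices?fields=id,name returns only those two keys.
    fields = request.args.get("fields")
    if not fields:
        return jsonify(DEVICES)
    wanted = set(fields.split(","))
    return jsonify([{k: v for k, v in d.items() if k in wanted} for d in DEVICES])
```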
That being said, I'm now looking at a query language to work with Neo4j, so I'm back to looking at dynamic APIs. (:
> Lets you make multiple queries and reduce/extend the size of your payload as needed.
While this is cool, in theory, I haven't found it to be in practice. If it's an internal API, you can just provide a way for the client to get exactly what they need in a single query. If it's an external API, you have to deal with putting limits in place to keep users from burdening the system with complicated requests. Limits can become complex very quickly.
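For instance, even a bare-bones depth cap already means walking the query AST, and that's before you account for breadth, list sizes, or cost weights (a sketch using the graphql-core package; the cap is arbitrary):

```python
from graphql import parse

MAX_DEPTH = 5  # arbitrary cap; real limits also weigh breadth and cost

def depth(node, level=0):
    # Recursively measure how deeply selection sets nest.
    sel = getattr(node, "selection_set", None)
    if sel is None:
        return level
    return max(depth(child, level + 1) for child in sel.selections)

def allowed(query: str) -> bool:
    return all(depth(d) <= MAX_DEPTH for d in parse(query).definitions)

print(allowed("{ a { b { c } } }"))  # True: nests 3 levels deep
```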
I am also considering working with Neo4j for a project.
I am wondering: why not build on top of their HTTP API? You can send multiple Cypher statements over HTTP, and (correct me if I am totally wrong) by "stealing" the concept of GraphQL's persisted queries, you could make use of HTTP caching as well.
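Something like this against the transactional endpoint, say (the URL, credentials, and example statements are placeholders):

```python
import requests

# Neo4j's transactional HTTP endpoint accepts several Cypher
# statements in a single request.
payload = {
    "statements": [
        {"statement": "MATCH (n) RETURN count(n) AS nodes"},
        {"statement": "MATCH (a)-[r]->(b) RETURN type(r) AS rel, count(*) AS n"},
    ]
}
resp = requests.post(
    "http://localhost:7474/db/neo4j/tx/commit",
    json=payload,
    auth=("neo4j", "password"),
)
print(resp.json()["results"])
```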
Just to make the conversation easier, the use case I have is a network topology, so you have things like routers, switches, ports, etc. If you take it all the way up to Layer 7 (application), you can have, say, web services. So you could in theory ask: link X was cut, what is affected?
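For the "link X was cut" question, I'd imagine something like a variable-length path query (the labels and relationship names below are just how I'm picturing the schema):

```python
from neo4j import GraphDatabase

# Imagined schema: anything that depends on a link, directly or
# transitively, is connected via DEPENDS_ON relationships.
IMPACT_QUERY = """
MATCH (l:Link {id: $link_id})<-[:DEPENDS_ON*1..]-(affected)
RETURN DISTINCT labels(affected) AS kind, affected.name AS name
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    for record in session.run(IMPACT_QUERY, link_id="link-42"):
        print(record["kind"], record["name"])
driver.close()
```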
Anyways... my basic POC was exposing endpoints to do things like get a list of all devices, but I'm basically just writing custom Cypher code to do that query, and the benefits of Neo4j basically go out the door. It still has some interesting graph features, but if all I'm doing is writing a custom endpoint for every use case, it's mainly pointless.
You can certainly do a simple POST endpoint which takes a Neo4j query and executes it, with some caching on top.
Either way, in order to make Neo4j worth it, I need a way to make the queries more dynamic. So right now I'm considering a few options.
One option is just having a dumb POST /custom/query that maybe only supports read operations. You can add a layer of auth, but I'm not a big fan of having some endpoint that's basically a pipe to Neo4j. It feels just as bad as saying: type any SQL here and we'll execute it on the server. If people know what they're doing, that's fine... but at that point just set up phpMyAdmin/pgAdmin. You're trusting folks to know what they're doing, and if someone accidentally drops Bobby Tables (https://xkcd.com/327/), it's an accepted risk.
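For what it's worth, the read-only variant can at least lean on the server: running the submitted Cypher in a READ transaction makes Neo4j reject write clauses. A minimal sketch with Flask and the official neo4j driver (auth and hardening are hand-waved):

```python
from flask import Flask, request, jsonify
from neo4j import GraphDatabase
from neo4j.exceptions import Neo4jError

app = Flask(__name__)
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

@app.post("/custom/query")
def custom_query():
    body = request.get_json()
    try:
        with driver.session() as session:
            # execute_read runs the work in a READ transaction, so the
            # server refuses CREATE/DELETE/SET in the submitted Cypher.
            records = session.execute_read(
                lambda tx: tx.run(body["query"], body.get("params", {})).data()
            )
        return jsonify(records)
    except Neo4jError as e:
        return jsonify({"error": e.message}), 400
```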
If you just have a proxy to run any Cypher query, you might as well just give users access to a Neo4j web instance and let them play there.
Anyways, still in the early stages of trying to figure out how to best leverage Neo4j.
You could give a POST /custom/query with only read capabilities.
Also, you can offer some custom "helper" operations alongside it.
And certainly you could give them some option to prettify the response.
Because querying the right things is one thing; structuring the response according to your needs is another thing entirely.
Yeah, this was years ago, but we had a concept of "hydrated" objects, so you'd pass a flag to get back a shallow object or the hydrated version that had all the relationships loaded as well.
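Conceptually something like this (the shape of the objects is invented):

```python
# Shallow vs. hydrated serialization behind a single flag.
def serialize_device(device: dict, hydrated: bool = False) -> dict:
    data = {"id": device["id"], "name": device["name"]}
    if hydrated:
        # The hydrated version pulls the relationships in as well.
        data["ports"] = device.get("ports", [])
        data["neighbors"] = device.get("neighbors", [])
    return data

device = {"id": 1, "name": "core-sw-1", "ports": [1, 2], "neighbors": ["edge-rtr-1"]}
print(serialize_device(device))                 # shallow
print(serialize_device(device, hydrated=True))  # hydrated
```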
Some flags for helpers that fetch additional data would be good.
I was about to say. 'Cause they can't figure out IPv6 either. Their K8s and VPC networking is still not IPv6 compliant, last I checked. If you're bored, you can get this [extension](https://chrome.google.com/webstore/detail/ipvfoo/ecanpcehffn...) and see just how much of the internet is actually on IPv6.
"GitHub Actions has become the #1 CI service, used by popular open source projects and enterprises alike."
I must be missing something. For whatever reason GH Actions just never appealed to me. I've used Drone CI more; granted, Actions is better than Travis, but "the #1 CI service" seems like a stretch.
It's just extremely low friction. Push some YAML in an existing repo, done. I've enjoyed using TeamCity in the past, and tolerated Hudson/Jenkins, and I do keep expecting to hit something that makes me want to go back, but it hasn't happened yet.
Another way to put it: any CI properly integrated into GitHub will be the #1 CI by sheer market size.
GitLab has had a similar YAML system for more than a decade; it's low friction, pretty polished, and highly reliable. But it will always be a more niche product simply because GitLab is way smaller than GitHub.
It's free for open source projects and integrated nicely into GitHub. Even if another CI service is better, the bar to get started with Actions is much lower.
I assume he means #1 in usage (measurable) rather than in quality (an opinion, which he may also have, but which he probably wouldn't state as simply as "has become the #1").
When you say "seems like a stretch", I read it as you thinking he meant "quality, as a matter of opinion".
I would not be surprised if it's #1 in usage, getting there by being integrated into GitHub, free, and actually pretty darn good.
Honestly, I'm surprised even by the quantity. I'd question the quality; from my brief experience with it, I didn't find it that impressive.
The quantity aspect is also a bit surprising since it hasn't been out that long, so seeing a #1 leader is impressive in itself, even if it's free and integrated.
I mean, I have a bunch of TravisCI projects that I really don't want to migrate over unless I have a good reason. The old "if it works, leave it alone."
If there's enough motivation for people to switch, that's impressive (or it shows how much people hate Travis/Jenkins/whatever CI they ARE using).
I use it at work, and I hate it. There's a senior-level guy who keeps pushing for it, gets it barely working, then leaves our CI/CD pipeline broken, and I have to go clean it up. I'm working on moving our team away from GitHub Actions now.