Something must be preventing them from updating the status page at this point. Of course they could still deem it not enough, but just from my limited tests, Docker, Buf, etc. are outright down (it may not be GCP that is down, but it is quite the coincidence). I'd wager that this is much more widespread.
I'm actually on a bridge call with Google Cloud, we're a large customer -- I just learned today that their status page is not automated; someone actually manually updates it!
That's the case with every status page. These pages are managed by business people, not engineers, because their primary purpose is to show customers that the company is meeting contractually defined SLAs.
Maybe or maybe not, but someone with nothing better to do than monitor that page out of boredom might “get on the horn” with lots of people to complain if a green check mark turns to a red X.
Refunds aren't automatically issued based on that page, but seeing a red status makes it too easy for customers to point to it and go "see, you were down, give us a refund".
The best way to consistently have good "time to response" metrics is to be the one deciding when an incident "actually" started happening, if at all :)
I am by no means a C++ expert, a noob rather. It might be possible to make this generic, but it seems quite easy to forget a small detail here and then be kneecapped because you forgot one of the overloads, so that in a small number of edge cases your abstraction isn't actually cleaned up.
Better than nothing, and it might be the preferred way of doing things in C++, but it does seem dangerous to me. ;)
I used to program a lot in C++ but have since switched to a number of different languages. Everything in C++ is this way, and it's hard to see that things don't have to be this way when you're in the trenches of C++.
I distinctly remember what a breath of fresh air it was to switch to Java and then later C# where in both languages an "int" is the only 32-bit signed integer type, instead of hundreds of redefinitions for each library like ZLIB_INT32 or whatever.
"Native" types like usize and isize as used in Rust I'm totally fine with.
What I got frustrated with in C/C++ is the insanity of each and every third-party library redefining every type. I understand the history and reasoning behind this, but it's one of those things that ought to have been fixed in the early 2000s, not decades later when it's too late.
I think by C++17 (C++14, even) it was in a very good state, and since then third-party libraries have been getting better and better. So really, a decade and a half later than you think it should have been.
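For what it's worth, the standard fix being alluded to here is `<cstdint>`, available since C++11. A minimal sketch of the fixed-width types that make per-library typedefs unnecessary:

    // Since C++11, <cstdint> provides standard fixed-width integer types,
    // so libraries no longer need their own ZLIB_INT32-style typedefs.
    #include <cstdint>
    #include <cstdio>

    int main() {
        std::int32_t checksum = -1;  // exactly 32 bits, signed, on every platform
        std::uint64_t offset = 0;    // exactly 64 bits, unsigned
        std::printf("%d %llu\n", (int)checksum, (unsigned long long)offset);
    }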
C++ is a difficult language to use well but it becomes a lot easier when you turn on the 'correct' warnings. Turning on the `-Wdeprecated-copy-dtor` warning would help you not forget one of the cases in the rule of 3.
I would have loved it if this warning was on by default but sadly, you have to know to turn it on.
As far as I know, many experienced C++ programmers have a list of warnings they use for their projects and it varies by industry since different compilers will have different flags. When I was working on bigger C++ projects, one of the senior programmers would configure the warnings for everyone.
An easy start is to use `-Wall -Wextra -Wpedantic` for the gcc and clang compilers. I would now add `-Wdeprecated-copy-dtor` to the list, but I only learned about it while writing this article.
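To make that warning concrete, here is a minimal sketch (class and names made up) of the rule-of-3 violation it catches: a user-declared destructor combined with the compiler-generated copy operations.

    // Compile with: g++ -Wall -Wextra -Wpedantic -Wdeprecated-copy-dtor demo.cpp
    // The implicit copy below gets flagged; without the warning this builds
    // silently and double-frees.
    #include <cstring>

    class Buffer {
    public:
        explicit Buffer(const char* s) : data_(new char[std::strlen(s) + 1]) {
            std::strcpy(data_, s);
        }
        ~Buffer() { delete[] data_; }  // user-declared dtor, but no copy ctor/assignment
    private:
        char* data_;
    };

    int main() {
        Buffer a("hello");
        Buffer b = a;  // warning: implicitly-declared copy is deprecated
    }                  // both destructors delete the same pointer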
It is possible, but it isn't hard to remember the rules. The scope of where you can mess up is isolated to the single class, so it is generally possible to get this right. Manual memory management is easy when the new and delete are near each other, but you often need them far removed. RAII ensures the places where you can mess up are close together.
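A minimal sketch of that point (the wrapper and names are made up): the acquire and release live on adjacent lines of the class, and every exit path of the caller runs the cleanup.

    #include <cstdio>

    class File {  // hypothetical RAII wrapper around a C FILE*
    public:
        explicit File(const char* path) : f_(std::fopen(path, "r")) {}
        ~File() { if (f_) std::fclose(f_); }  // release sits right next to acquire
        File(const File&) = delete;           // rule of three: forbid copies
        File& operator=(const File&) = delete;
        std::FILE* get() const { return f_; }
    private:
        std::FILE* f_;
    };

    void use(const char* path) {
        File f(path);          // cleanup now happens on every exit path,
        if (!f.get()) return;  // including this early return
        // ... read from f.get() ...
    }                          // fclose runs here automatically

    int main() { use("example.txt"); }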
A static analysis tool can enforce the rule as well -- you should have one.
Unfortunately, this is one area where, because C++ predates RAII, the defaults can't be changed without breaking a lot of code. I'm saying the problem is manageable -- but there is a real problem. If you make a new language, learn from this mistake.
RAII was something that C++ incidentally made possible. Nobody realized its power until after C++ existed. It was more of a "look at the cool thing we can do" when C++ was invented; only later did people realize just how great/powerful it is.
Kind of both, as I understand it. Destructors were something C++ was going for, but their full power doesn't seem to have been really understood until long after they existed. At least not in C++ as I remember it in those days. Maybe Bjarne had a better vision that my professors didn't share, though.
They certainly didn't, because using destructors for RAII was already something I learnt from the Turbo C++ 1.0 for MS-DOS manuals, back in 1993.
And by the time of Windows 3.x with OWL, Apple AppFramework, OS/2 CSet++, Motif++, they were used all over the place.
Easy to check those surviving manuals.
Also in 1995, the C++ lecture material at the university, back when everyone was implementing their own personal standard library, already discussed RAII design.
Note that even now there are plenty of universities that fail in their approach to teaching C++, hence Kate Gregory's talk aptly named "Stop Teaching C", given in the context of teaching C++.
To be honest, this is a little sad for me. I'd hoped that Neon would be able to fill the vacuum left by CockroachDB going "business source".
Being bought by Databricks makes Neon far less interesting to me. I simply don't trust such a large organisation, one that has previously had issues acquiring companies, to really care about what is pretty much the most important piece of infrastructure I've got.
There is certainly enough demand for a more "modern" PostgreSQL, but pretty much all of the direct alternatives stray far from its roots, whether in pricing, compatibility, source availability, etc.
Back when I was looking at alternatives to Postgres, these were considered:
1. AWS RDS: We were already on AWS RDS, but it is expensive, and has scaling and operations issues
2. AWS Aurora: The one that ended up being recommended; it solved some operations issues but came with other niche downsides, pretty much the same downsides as other wire-compatible PostgreSQL alternatives
3. CockroachDB: Was very interesting and wire compatible, but had deeper compatibility issues; it was open source at the time, but it didn't fit with our tooling
4. Neon: Was considered too immature at the time, but certainly interesting; it looked able to solve most of our challenges, except maybe some of the operations problems with PostgreSQL. I didn't look deeper into it at the time
5. Yugabyte: interesting technology, had some of the same compatibility issues, but fewer than the others, as they're also using the query engine from PostgreSQL as far as I can tell.
There are also various self-hosting utilities for PostgreSQL that I looked at, specifically CloudPG, but we didn't have the resources to maintain a stateful deployment of Kubernetes and Postgres ourselves. It would have fulfilled most of our requirements, but with an extra maintenance burden for both Kubernetes and PostgreSQL.
Hosting PostgreSQL by itself didn't have mature enough replication and operations features at that point. It is steadily maturing, but as we had many databases, manual upgrades and patches would have been very time-consuming, as PostgreSQL has some not-so-nice upgrade quirks: you basically have to unload and reload all data during major upgrades, unless you use extensions and other services to circumvent this issue.
> 5. Yugabyte: interesting technology, had some of the same compatibility issues, but fewer than the others, as they're also using the query engine from PostgreSQL as far as I can tell.
In my brief experience as an engineer (2014->), I've learned that the best "modern" alternative to PostgreSQL at year X has been PostgreSQL at year X+5. :)
Mainly in relation to notify/listen and advisory locks. Most of our code bases use advisory-lock-based migration tools. It would be a large lift to move to an alternative or to build a migration scheduler out of process.
I feel the pain. I had so much trouble setting up QMK on my handwired keyboard; I had to give up on VIA and VIAL and just configure my keys in text.
It basically took the same amount of time to 3D print the case, solder it (Dactyl Manuform), etc. as it did to configure it. Massive pain, and quite unclear documentation on how to actually set up a handwired keyboard. Next time will be smoother, and I probably should document what I did ;) Honestly, I felt like a rookie programmer all over again: good fun, but super challenging.
I am biased because I built the Rust SDK for Dagger, but I think it is a real step forward for CI. Is it perfect? Nope. But it allows fixing a lot of the shortcomings the author describes.
Pros:
- Pipeline as code: write it in Golang, Python, TypeScript, or a mix of the above.
- Really fast once cached
- Use your language's libraries for code sharing, versioning, and testing
- Runs everywhere: local, CI, etc. Easy to change from GitHub Actions to something else.
Cons:
- Slow on the first run; lots of pulling of Docker images
- The DSL and modules can feel foreign initially.
- Modules are definitely a framework; I prefer just having a binary I can ship (which is why the Rust SDK doesn't support modules yet).
- Doesn't handle large monorepos well; it relies heavily on caching and currently runs on a single node. It can work if you don't have hundreds of services, especially if the builder is a large machine.
Just the fact that you can actually write CI pipelines that can be tested, packaged, versioned, etc. allows us to ship our pipelines as products, which is quite nice and something we've come to rely on heavily.
I'm genuinely intrigued by Dagger, but also super confused. For example, this feels like extra complexity around a simple shell command, and I'm trying to grok why the complexity is worth it:
https://docs.dagger.io/quickstart/test/#inspect-the-dagger-f...
I'm a fanboy of Rust, Containerization, and everything-as-code, so on paper Dagger and your Rust SDK seems like it's made for me. But when I read the examples... I dunno, I just don't get it.
It is a perfectly valid criticism: Dagger is not a full build system that dictates what your artifacts look like, unlike, say, Bazel or Nix. I think of Dagger as a sort of interface that lets me test and package my build and CI into smaller logical bits, and rely on the community for parts of it as well.
In the end you do end up slinging apt install commands, for example, but you can test those parts in isolation: does my CI actually scan for this kind of vulnerability, does it install the Postgres driver, and when I build a Rust binary, is it musl and does it work on scratch images?
In some sense Dagger feels a little bit like a programmatic wrapper on top of Docker, because that is actually quite close to what it is.
You can also use it for other things, because in my mind it is the easiest way of orchestrating containers: for example, running Renovate over a list of repositories, spawning ad-hoc LLM containers (pre-Ollama), etc. There are lots of nice uses outside of CI as well, even if CI is the major selling point.
That conceptually allows one to have two different "libraries" in your CI: one in literal Golang, as a function which takes in a source and sets up common step configurations, and the other as Dagger Functions via Modules (<https://docs.dagger.io/features/modules> or <https://docs.dagger.io/api/custom-functions#initialize-a-dag...>). The latter work much like GHA `uses:` blocks from an organization/repo@tag style setup, with the grave difference that they can run locally or in CI, which for damn sure is not true of GHA `uses:` blocks.
The closest analogy I have is from GitLab CI (since AFAIK GHA allows neither YAML anchors nor "logical extension"):
    .common: &common  # <-- use whichever style your team prefers
      image: node:21-slim
      cache: {}  # ...
      environment: []  # ...

    my-job1:
      stage: test
      <<: *common
      script: npm run test:unit

    my-job2:
      stage: something-else
      extends: .common
      script: echo "do something else"
1: I'm aware that I am using Golang multiple times in this comment, and for sure I am a static-typing fanboi, but as the docs show, they allow other languages too.
Exactly. When comparing Dagger with a lot of the other formats for CI, it may seem more logical. I've spent so much time debugging GitHub Actions, waiting 10 minutes to test a full pipeline after a change, etc., over and over again. Dagger has a weird DSL in the programming languages as well, but at least it is actual code that I can write for loops around, or give parameters to and reuse. Rather than a Groovy file for Jenkins ;)
I've been using a mix of delta and difftastic; both are amazing. Difftastic especially, for tree-sitter AST syntaxes: it is a bit slower, but AST-aware diffing is so nice.
I'd like to use IPv6, if only to avoid having to pay for IPv4 addresses for some private VPCs (with public addresses, for reasons). I remember having issues with fly.io as well, because they're IPv6 by default if I remember correctly.
Currently Denmark has worse support than I expected:
> List of Danish providers (Liste over danske udbydere)
> ISPs on the list: 41
> ISPs with full IPv6 support: 17 (41%)
> ISPs with partial IPv6 support: 10 (24%)
> ISPs with no IPv6 support: 14 (34%)
Nickel lang is such an effort. I'd say the syntax is a mix of JSON and Lua, and it aims for a non-Turing-complete language. It is still a bit early, but it looks promising.
No, Nickel is Turing-complete. That's been one of the characteristics intended to distinguish it from most other configuration languages from the start.
The Charm/Bubbletea tool stack is very much focused on an Elm-style architecture, i.e. message passing and deriving the UI from state. You can still do immediate-mode-type UI if that is what you prefer, but it won't fit with any of the standard components. Bubbletea is more or less a framework, so even if you know Go, it will require you to learn how to build to its strengths.
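For readers unfamiliar with the Elm style, here is a minimal sketch of the loop it implies (written in C++ purely for illustration; Bubbletea itself is Go, and all names here are made up): messages describe events, a single update function evolves the model, and the view is derived from the model.

    #include <iostream>
    #include <string>
    #include <variant>

    struct Increment {};  // messages describe what happened
    struct Quit {};
    using Msg = std::variant<Increment, Quit>;

    struct Model { int count = 0; bool done = false; };  // all UI state lives here

    Model update(Model m, const Msg& msg) {  // state changes only here
        if (std::holds_alternative<Increment>(msg)) m.count++;
        if (std::holds_alternative<Quit>(msg)) m.done = true;
        return m;
    }

    std::string view(const Model& m) {  // UI is a pure function of state
        return "count: " + std::to_string(m.count) + "\n";
    }

    int main() {
        Model model;
        for (const Msg& msg : {Msg{Increment{}}, Msg{Increment{}}, Msg{Quit{}}}) {
            model = update(model, msg);  // the message loop
            std::cout << view(model);
        }
    }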
Ratatui is by default not very opinionated about how to handle state and updates, which requires some development from you, or a third-party library, to get an opinionated architecture around it. Ratatui is more of a collection of libraries; out of the box it expects a certain interface for components, but it is up to you how you want to compose them together, whether that is immediate, stateful, React, or Elm style. Stateful is the default for all their examples.
Charm's way of doing terminal UI is very much based around strings, which can sometimes cause issues with spacing, as components can be fiddly to constrain within a certain space.
Ratatui builds the UI on top of a matrix of cells, which makes it more difficult to do the easy things, but allows you to more easily build complex UIs.
Generally I prefer Ratatui, as you can build really robust and fast UIs on top of it. It does take a bit more work to get started. I am also biased toward Rust, though.
Ratatui maintainer here. I'd agree with all of the above points.
The lack of an opinionated approach stems from Ratatui being not a framework but a library (you call us, we don't call you), and from not having any of the event/input handling included.
There's likely room for a framework or two on top of it.