
That sounds amazing. I aspire to get a setup like yours. I am on a Pixel with the stock OS and I can't stand the way Google is pushing AI into everything on my phone.

I haven't switched to GrapheneOS yet because I read that there are issues with NFC and a few other things. I assume this new phone won't have those problems, so I think that will be my catalyst to do a big overhaul.


This depends on what you mean by 'issues with NFC'. My understanding is that Google requires an OS blessed by them for contactless payments in Google Wallet to work. That restriction applies to all alternative operating systems that aren't Google-certified stock Android.

The OEM partnership would not change that.

In non-NA regions there may be more options for mobile contactless payments using apps that are not Google Wallet/Pay. So it also depends where in the world you are.


I doubt contactless payments will ever work on Graphene. In any case, I don't find carrying a credit card particularly inconvenient. I prefer cash for small transactions too; it's the only means of payment that is truly anonymous.


I've seen some pretty good React codebases and I've seen plenty of backend spaghetti code. In all cases it's not the tools, it's the programmers and usually it's layers and layers of people not taking the time to write clean code. Probably because their management doesn't value it or they don't have someone with the experience necessary to guide them towards clean code.


I disagree. With classes, everything was stateful (because, y'know, classes). People were doing all sorts of crazy things with the lifecycle methods, and it was always a pain to have to remember the "this" scope and bind your event handlers. I saw so many bugs written by people who lost track of what "this" was.

Both paradigms have foot guns but having used both I much prefer the hook version.
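
For anyone who hasn't felt this pain, here's a rough sketch of what I mean (TypeScript/JSX, component names invented for illustration):

    import React, { Component, useState } from "react";

    class CounterClass extends Component<{}, { count: number }> {
      constructor(props: {}) {
        super(props);
        this.state = { count: 0 };
        // Forgetting this bind (or an arrow function) was a classic source
        // of "this is undefined" bugs when React invoked the handler.
        this.handleClick = this.handleClick.bind(this);
      }

      handleClick() {
        this.setState({ count: this.state.count + 1 });
      }

      render() {
        return <button onClick={this.handleClick}>{this.state.count}</button>;
      }
    }

    // The hook version: no `this` to lose track of, state is just a closure.
    function CounterHook() {
      const [count, setCount] = useState(0);
      return <button onClick={() => setCount(c => c + 1)}>{count}</button>;
    }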


> most great engineers will actively work with management to make sure they're not single points of failure!

Sure, but that is a load-bearing "great". Not every company is staffed with great, selfless engineers.

I'm an engineer and I've worked at companies with engineers who actively resisted making themselves not a single point of failure because it gave them control and job security. I think these types are not uncommon, and it really sucks when they have their management Stockholm-syndromed, because they make it hard for all the other "great" engineers to do their jobs.


The company not being able to run without you doesn't mean you have job security, it just makes the company hurt more when they fire you based on someone's spreadsheet.


Tell that to the gatekeeping engineers who think otherwise.


Good managers will recognize that and know what those kinds of engineers are doing. Every company should have at least some good managers - seek them out, it's worth it. If you can't find one at your current company, try switching companies - again, it's worth it IMO.


This comment really nicely captures how I feel about this. There's something to be said about good faith and knowing what the spirit of the agreement is.

There are some comments here saying stuff like "these compliance forms are ridiculous and are often just bureaucratic nonsense", and then you see comments advocating for playing dumb and answering in bad faith - and there you go.

I see a bit of an attitude of "everyone is doing it" being used to justify doing it too, just to compete, because you're at a disadvantage if you don't. And that's not entirely wrong, but it sucks, and I personally will avoid competing in that way. Probably that means not many sales in my career. Or science, but that's another topic...


Yeah I came away feeling like this was clickbait. Based on the title I expected to read something about the app stores quietly injecting telemetry in your extension or something like that. Something outside of the developer's control or being done quietly by default as part of the standard packaging and delivery pipeline.

What the author described was very much not that. What they described was developers making a conscious decision to add untrusted code to their extension without properly verifying it or following security best practices.

A more accurate title would be something like "It's hard to trust browser extensions, developers are bombarded with offers of easy money and may negligently add malware/adware"


Ahh as someone who has built several scraping applications, I feel their pain. It's a constant battle to keep your scraper working.


Bloody shame, OpsWorks was a great service in my experience. I built a few clusters with it before Kubernetes and Terraform were a thing.

That said, I heard from folks at AWS that it was not well maintained and a bit of a mess behind the scenes. I can't say I'm surprised it's being shut down given where the technology landscape has shifted since the service was originally offered.

RIP OpsWorks.


OpsWorks was based on a really old fork of the Chef code. I did quite a bit of Chef in my day, but it really only made sense in a physical hardware/VMware virtual instance kind of environment, where you had these "pets" that you needed to keep configured the right way.

Once you got up to the level of AWS CAFO-style "cattle" instances, it stopped making so much sense. With autoscaling, you need your configuration baked into the AMI before it boots; otherwise you're going to be in a world of hurt as you try to autoscale to keep up with the load, but then spend the first thirty minutes of each instance's lifetime doing all the configuration after the autoscale event.

A wise Chef once told me that "auto scaling before configuration equals a sad panda", or something to that effect.

Chef did try to come up with a software solution that would work better in an AWS Lambda/Kubernetes style environment, and I was involved with that community for a while, but I don't know what ever became of that. I probably haven't logged into those Slack channels since 2017.

IMO, there are much better tools for managing your systems on AWS. CDK FTW!


> Back when I was a junior developer, there was a smoke test in our pipeline that never passed. I recall asking, “Why is this test failing?” The Senior Developer I was pairing with answered, “Ohhh, that one, yeah it hardly ever passes.” From that moment on, every time I saw a CI failure, I wondered: “Is this a flaky test, or a genuine failure?”

This is a really key insight. It erodes trust in the entire test suite and leads to false negatives, where genuine failures get dismissed as flakiness. If I couldn't get the time budget to fix the test, I'd delete it. I think a flaky test is worse than nothing.


"Normalisation of Deviance" is a concept that will change the way you look at the world once you learn to recognise it. It's made famous by Richard Feynman's report about the Challenger disaster, where he said that NASA management had started accepting recurring mission-critical failures as normal issues and ignored them.

My favourite one is: Pick a server or a piece of enterprise software and go take a look at its logs. If it's doing anything interesting at all, it'll be full of errors. There's a decent chance that those errors are being ignored by everyone responsible for the system, because they're "the usual errors".

I've seen this go as far as cluster nodes crashing multiple times per day and rebooting over and over, causing mass fail-over events of services. That was written up as "the system is usually this slow", in the sense of "there is nothing we can do about it."

It's not slow! It's broken!


Oof, yes. I used to be an SRE at Google, with oncall responsibility for dozens of servers maintained by a dozen or so dev teams.

Trying to track down issues with requests that crossed or interacted with 10-15 services, when _all_ those services had logs full of 'normal' errors (that the devs had learned to ignore) was...pretty brutal. I don't know how many hours I wasted chasing red herrings while debugging ongoing prod issues.


We're using AWS X-Ray for this purpose, i.e. a service always passes on and logs the X-Ray identifier generated at first entry into the system. Pretty helpful for exactly this. And yes, there should be consistent log handling / monitoring. Depending on the service, we distinguish between the error log level (= expected user errors) and the critical error level (makes our monitor go red).
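
The gist, as a rough sketch (TypeScript, names invented; this is not the actual X-Ray SDK API): forward one trace identifier on every outbound call and stamp it on every log line:

    const TRACE_HEADER = "X-Amzn-Trace-Id"; // the header X-Ray uses, to the best of my knowledge

    function logWithTrace(traceId: string, level: "info" | "error", msg: string) {
      console.log(JSON.stringify({ traceId, level, msg, ts: new Date().toISOString() }));
    }

    async function callDownstream(url: string, traceId: string): Promise<Response> {
      logWithTrace(traceId, "info", `calling ${url}`);
      // Pass the same identifier downstream so its logs can be correlated with ours.
      return fetch(url, { headers: { [TRACE_HEADER]: traceId } });
    }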


It often isn't as simple as using a correlation identifier and looking at logs across the service infrastructure. If you have a misconfiguration or hardware issue, it may well be intermittent and only visible as an error in a log entry before or after the request, while the response itself carries incorrect data inside a properly formatted envelope.


I guess that's one of the advantages of serverless - by definition there can be no unrelated error in state beyond the request (because there is none), except for the infrastructure definition itself. But a misconfig there will always show up as an error when the particular resource is called - at least I haven't seen anything else yet.


That's assuming your "serverless" runtime is actually the problem.


You don't even have to go as far from your desk as a remote server to see this happening, or open a log file.

The whole concept of addressing issues on your computer by rebooting it is 'normalization of deviance', and yet IT support people will rant and rave about how it's the users' fault for not rebooting their systems whenever they get complaints of performance problems or instability from users with high uptimes - as if it's not the IT department itself which has loaded that user's computer to the gills with software that's full of memory leaks, litters the disk with files, etc.


I agree with what you're saying, but this is a bad example:

> Pick a server or a piece of enterprise software and go take a look at its logs. If it's doing anything interesting at all, it'll be full of errors.

It's true, but IME those "errors" are mostly worth ignoring. Developers, in general, are really bad at logging, and so most logs are full of useless noise. Doubly so for most "enterprise software".

The trouble is context. Eg: "malformed email address" is indeed an error that prevents the email process from sending a message, so it's common that someone will put in a log.Error() call for that. In many cases though, that's just a user problem. The system operator isn't going to and in fact can't address it. "Email server unreachable" on the other hand is definitely an error the operator should care about.

I still haven't actually done it yet, but someday I want to rename that call to log.PageEntireDevTeamAt3AM() and see what happens to log quality...
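
Until then, here's a sketch of the rule of thumb I wish people followed - severity keyed to "can the operator do anything about it?" (TypeScript, helper names made up for illustration):

    // Stand-in for a real SMTP client call (an assumption for this sketch).
    async function smtpSend(to: string, body: string): Promise<void> { /* ... */ }

    function looksLikeEmail(s: string): boolean {
      // Deliberately minimal: just "something@something.something".
      return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(s);
    }

    async function sendEmail(to: string, body: string, log: Console) {
      if (!looksLikeEmail(to)) {
        // A user's typo: the operator can't fix it, so don't log it as an error.
        log.warn(`skipping send: malformed recipient address ${JSON.stringify(to)}`);
        return;
      }
      try {
        await smtpSend(to, body);
      } catch (err) {
        // The mail server being unreachable IS something the operator must act on.
        log.error(`email server unreachable: ${String(err)}`);
        throw err;
      }
    }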


> The trouble is context. Eg: "malformed email address" is indeed an error that prevents the email process from sending a message

I’m sure you didn’t quite mean it as literally as I’m going to take it, and I’m sorry for that. Any process that gets as far as attempting to send an email to something that isn’t a valid e-mail address is, however, an issue that should not be ignored, in my opinion.

If your e-mail sending process can’t expect valid input, then it should validate its input and not cause an error. Of course, this is caused by saving invalid e-mail addresses as e-mail addresses in the first place, which in itself shows that you’re in trouble, because it means you have to validate everything everywhere since you can’t trust anything. And so on. I’m obviously not disagreeing with your premise. It’s easy to imagine why it would happen, and also why it would in fact end up in the “error.log”, but it’s really not an ignorable issue. Or it can be, and it likely is in a lot of places, but that’s exactly the GP’s point, isn’t it? That a culture which allows that will eventually cause the spaceship to crash.

I think we as a society are far too cool with IT errors in general. I recently went to an appointment where they had some digital parking system where you’d enter your license plate. Only the system was down and the receptionist was like “don’t worry, when the system is down they can’t hand out tickets”. Which is all well and good unless you’re damaged by working in digitalisation and can’t help but do the mental math on just how much money that is costing the parking service. It’s not just the system that’s down, it’s also the entire fleet of parking patrol people who have to sit around and wait for it to come back up. It’s the support phones being hammered and so on. And we just collectively shrug it off because that’s just how IT works “teehee”. I realise this example is probably not the best, considering it’s parking services, but it’s like that everywhere, isn’t it?


Attempting to send an email is one of the better ways to see if it's actually valid ;)

Last time I tried to order pizza online for pickup, the website required my email address (I guess cash isn't enough payment and they need an ad destination), but I physically couldn't give them my money because the site had one of those broken email regexes.


I disagree about extensively validating email addresses. This is why: https://davidcel.is/articles/stop-validating-email-addresses...


The article you link ends by agreeing with what I said, so I’m not exactly sure what to take from it. If your service fails because it’s trying to create and send an email to an invalid email address, then you have an issue. That is not to say that you need excessive validation, but in most email libraries I’ve ever used or built, you’re going to get runtime errors if you can’t provide something that looks like x@x.x, which is what you want to avoid.

I guess it’s because I’m using the wrong words? English isn’t my first language, but what I mean isn’t that the email actually needs to work, just that it needs to be something in an email format.


> Developers, in general, are really bad at logging, and so most logs are full of useless noise.

Well, most logging systems do have different log priority levels.

https://manpages.debian.org/bookworm/manpages-dev/syslog.3.e...

LOG_CRIT and LOG_ALERT are two separate levels of "this is a real problem that needs to be addressed immediately", over just the LOG_ERR "I wasn't expecting that" or LOG_WARNING "Huh, that looks sus".

Most log viewers can filter by severity, but also, the logging systems can be set to only actually output logs of a certain severity. e.g. with setlogmask(3)

https://manpages.debian.org/bookworm/manpages-dev/setlogmask...

If you can get devs to log with the right severities, ideally based on some kind of "what action needs to be taken in response to this log message" metric, logs can be a lot more useful. (Most log messages should probably be tagged as LOG_WARNING or LOG_NOTICE, and should probably not even be emitted by default in prod.)

> someday I want to rename that call to log.PageEntireDevTeamAt3AM()

Yup, that's what LOG_CRIT and above is for :-)
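
For app-level logging that doesn't go through syslog, the same masking idea is easy to approximate - here's a little TypeScript analogue of setlogmask (levels loosely mirror syslog's; the rest is my own sketch):

    enum Level { Crit = 2, Err = 3, Warning = 4, Notice = 5, Info = 6, Debug = 7 }

    class Logger {
      constructor(private minLevel: Level) {}

      log(level: Level, msg: string) {
        // Lower numeric value = more severe, as in syslog(3).
        if (level > this.minLevel) return; // masked out, like setlogmask(LOG_UPTO(...))
        console.log(`[${Level[level]}] ${msg}`);
      }
    }

    // e.g. run prod at Notice so Debug/Info chatter is never emitted:
    const log = new Logger(Level.Notice);
    log.log(Level.Debug, "not emitted");
    log.log(Level.Err, "emitted");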


In my experience, the problem usually is that severity is context sensitive. For example, an external service temporarily returning a few HTTP 500s might not be a significant problem (you should basically expect all web services to do so occasionally), whereas it consistently returning them over a longer duration can definitely be a problem.
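
Which suggests the alert shouldn't hang off a single log line at all, but off a rate over a window - a rough sketch (TypeScript, thresholds are arbitrary assumptions):

    class ErrorRateWatcher {
      private failures: number[] = []; // timestamps (ms) of recent failures

      constructor(
        private windowMs = 5 * 60_000, // look at the last 5 minutes
        private threshold = 20,        // this many failures in the window counts as "sustained"
      ) {}

      recordFailure(now = Date.now()): "transient" | "sustained" {
        this.failures.push(now);
        this.failures = this.failures.filter(t => now - t <= this.windowMs); // drop aged-out failures
        return this.failures.length >= this.threshold ? "sustained" : "transient";
      }
    }

    // Usage: log transient 500s quietly, escalate the sustained ones.
    const watcher = new ErrorRateWatcher();
    function onUpstream500() {
      if (watcher.recordFailure() === "sustained") {
        console.error("upstream failing consistently - wake someone up");
      } else {
        console.warn("upstream returned 500, retrying");
      }
    }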


That is exactly what the previous commenter meant - developers are bad at setting the correct severity for logs.

This becomes an even bigger problem in huge organizations where each team has its own rules, so consistency vanishes.


> I still haven't actually done it yet, but someday I want to rename that call to log.PageEntireDevTeamAt3AM() and see what happens to log quality..

The second best thing (after adding metrics collection) we did as a dev team was forcing our way into the on-call rotation for our application. Now instead of grumpy sysops telling us how bad our application was (because they had to get up in the night to restart services and whatnot) but not giving us any clue to go on to fix the problems, we could do triage as the issues were occurring and actually fix them. Now with a mandate from our manager, because those on-call hours were coming from our budget. We went from multiple on-call issues a week to me gladly taking weeks of on-call rotation at a time, because I knew nothing bad was gonna happen. Unless netops did a patch round for their equipment, which they always seem to forget to tell us about.


  I want to rename that call to log.PageEntireDevTeamAt3AM() and see what
  happens to log quality
I managed to page the entire management team after hours at megacorp. After spending ~7 months tasked with relying on some consistently flaky services, I'd opened a P0 issue on a development environment. At the time I tried to be as contrite as possible, but in hindsight, what a colossal configuration error. My manager swore up and down he never caught flak for it, but he also knew I had one foot out the door.


> Developers, in general, are really bad at logging

That's not the problem. I'll regularly see errors such as:

    Connection to "http://maliciouscommandandcontrol.ru" failed. Retrying...
Just... noise, right? Best ignore it. The users haven't complained and my boss said I have other priorities right now...


> my boss said I have other priorities right now

Way to bury the lede...


Horrors from enterprise - a few weeks ago a solution architect forced me to roll back a fix (a basic null check) that they "couldn't test" because it's not a "real world" scenario (testers creating incorrect data would crash the business process for everyone)...


Your system could also retry the flaky tests. If one still fails after 3 or 5 runs, it's almost certainly a genuine defect.
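
Something along these lines, as a sketch (a generic TypeScript helper, not any particular test framework's built-in retry):

    async function retry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
      let lastError: unknown;
      for (let i = 0; i < attempts; i++) {
        try {
          return await fn();
        } catch (err) {
          lastError = err; // flaky failure? try again
        }
      }
      // Failed on every attempt: treat it as a genuine defect, not flakiness.
      throw lastError;
    }

    // e.g. await retry(() => runIntegrationTest(), 5);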


This is the power of GitHub Actions, where each workflow is its own YAML file.

If you have flaky tests, you can isolate them in their own workflow and deal with them separately from the rest of your CI process.

It does wonders for this. The idea of a monolithic CI job seems backward to me now.


> They won't exceed speed limits.

Uhh, about that...

> Tesla allows self-driving cars to break speed limit, again

- https://www.theguardian.com/technology/2017/jan/16/tesla-all...


Because going the speed limit is unsafe in certain scenarios: other human drivers expect people to exceed the limit the way they do.

It's the same with Tesla wanting to allow rolling stops at stop signs – not the letter of the law, but it's what humans expect. In many instances you are more likely to cause a minor accident (being rear-ended) by coming to a complete stop at a stop sign than you are by rolling through an intersection that clearly has nobody else in it.

Won't be a problem once most cars on the road are driverless.


I feel like I should be able to file a class action lawsuit against Tesla if the cars are programmed to speed.

