Pirate-of-SV's comments (Hacker News)

120 auto mechanics employed in the 9 Tesla service centers are affected.


It's because you're using Cloudflare DNS (1.1.1.1), for which the Internet Archive returns an incorrect IP. There are a bazillion threads about this on HN with more detail.


Thanks, that was it!


GitHub Markdown supports Mermaid OOTB, really useful.
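For instance, a fenced block tagged `mermaid` in a README or comment is rendered as a diagram (toy graph for illustration):

````markdown
```mermaid
graph TD
    A[Push commit] --> B[CI runs]
    B --> C[Deploy]
```
````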


Any domain on the public suffix list should just be ignored I suppose. https://publicsuffix.org/list/
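A toy sketch of such a check in shell, using a hard-coded excerpt of the list (the real list is fetched from publicsuffix.org and has wildcard `*.` and exception `!` rules that this naive exact match ignores):

```shell
# Tiny excerpt of the Public Suffix List (assumption: in practice you'd
# download and cache the full list from publicsuffix.org).
psl="com
co.uk
github.io"

domain="example.github.io"
suffix="${domain#*.}"   # naively strip the leftmost label

# If what's left is a public suffix, "example.github.io" is a registrable
# name under it, i.e. not something one operator controls outright.
if printf '%s\n' "$psl" | grep -qxF "$suffix"; then
  echo "$suffix is a public suffix"
fi
```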


It'd be nice as an extra precaution, but please don't build things that rely on the Public Suffix List for security (this list by its nature is only a laggy incomplete approximation of the actual use of domains).


Writing parsers and tokenizers probably.


That's what it's best for, but personally I use it for everything. If I ever get into low-level code I'll probably use Rust though.

You can confirm that parsers/tokenizers are ranked "best in class" there, though:

https://github.com/Gabriella439/post-rfc/blob/main/sotu.md


Very good! I use this hack every day in winter to heat my apartment (charge the laptop at work, run the git brute-force at home).


If only one node is being drained then yes.

If you want to drain 3 nodes as fast as possible, it's best to start by cordoning all 3 of them and only then run drain on each in turn. This causes minimal unnecessary interruption of pods: what we want to avoid is new pods starting on nodes that we are about to drain/remove.
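Sketched as commands (node names are hypothetical; the drain flags depend on what workloads you run):

```shell
# Mark all three nodes unschedulable first, so pods evicted from one
# node can't land on another node that is about to be drained itself.
kubectl cordon node-a node-b node-c

# Then evict the pods from each node in turn.
for node in node-a node-b node-c; do
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
done
```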


I think it's problematic that most user-configurable security features are switched off by default and require some effort to set up. This is getting better though, and for S3 buckets specifically a lot of additional features have shown up lately to "solve" this.


Can you give an example? Everything I can think of is either default-deny or minimally scoped, and "system" IAM roles tend to just not exist until you need them for the first time.


Ah yeah, for IAM the defaults seem sane, true! The thing I've come across recently is that encryption at rest is usually disabled (EBS, SQS, S3, SNS, Kinesis, CloudWatch log groups, CloudTrail). Some of these have server-side encryption where it's easy to just check that box; others require you to set up KMS keys etc.

Audit logs are in many cases disabled by default (RDS, S3, OpenSearch, ELB).

S3 does not require TLS requests by default. ECR does not have image scanning enabled by default.
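For TLS specifically, the usual workaround is a bucket policy that denies any request made without TLS (bucket name hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyInsecureTransport",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
    "Condition": {"Bool": {"aws:SecureTransport": "false"}}
  }]
}
```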

Also, new accounts have almost all regions enabled and a default VPC in each (with subnets, route tables, security groups, an internet gateway, and a DHCP option set). Keeping unused VPCs around is not recommended, but I suppose it makes onboarding easier.


Good points. Totally agree about encryption - I think S3 is a legacy case where SSE-S3 is implemented differently to SSE-KMS, but still I'd be on board with KMS encryption (using an AWS managed key) as the default.

Audit logging costs money, so I'm on the fence about that.

A default VPC is easy to disable in enterprise deployments, but for the rest of us it is necessary to do quick tests with EC2-adjacent services - I'd be in favour of it not existing until you try to launch something though.


I think SQS encryption was enabled by default (for newly created queues) just this week, funnily enough.

I completely agree though - I look at CloudConformity and so many of the warnings are for unencrypted resources.


It's not only about having a record: the record also needs to point to an IPv6 address. If that is missing, it's not as trivial as you make it sound.


An IPv6 address shouldn't be a problem either.
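Checking whether a name actually resolves to an IPv6 address is a one-liner, e.g. with `getent` on Linux (`localhost` stands in for the real host here):

```shell
# List only the IPv6 addresses a name resolves to; empty output
# means there is no usable AAAA/IPv6 mapping for it.
getent ahostsv6 localhost | awk '{print $1}' | sort -u
```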


Nice, one step closer to AWS. Next up I'd like to see an increase to the 30 second termination notice. 120 seconds would be good enough :)


Just clone AWS down to the API endpoints?


They already replicated the S3 API as an alternative to the Google Cloud Storage API: https://cloud.google.com/storage/docs/migrating#migration-si...


I get the impression that's pretty common; Cloudflare's new product is S3-compatible too.


Yeah, S3 is very common. Google Cloud Storage interestingly did not use the S3 API, but then added a limited compatibility layer specifically for "Migrating from Amazon S3".
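That compatibility layer means existing S3 tooling can often be pointed straight at GCS, e.g. with the AWS CLI (this assumes GCS HMAC interoperability keys are configured as the AWS credentials; not a full S3 feature match):

```shell
# List Google Cloud Storage buckets through its S3-compatible XML API
# by overriding the endpoint the AWS CLI talks to.
aws s3 ls --endpoint-url https://storage.googleapis.com
```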

