For iPhone it doesn't matter; I would still go for a discounted 13 rather than a 14, as you mainly gain a GPU core, which changes very little. But for things like MacBooks a smaller process means a lot: less energy used means less heat and fewer issues with throttling etc. This is big for Macs but less so for iPads and iPhones.
I mean, yields on N4 wafers are lower, so in the end it costs more per good die. Also, Apple doesn't do small chips or chiplets, so some of the silicon they make is really big. Big die = lower yields.
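To put rough numbers on the die-size point: with a classic Poisson yield model, yield drops exponentially with die area, so a big monolithic die loses many more candidates per wafer than a small one. A back-of-the-envelope sketch (the defect density and die areas below are made-up illustration values, not TSMC figures):

```python
import math

def poisson_yield(die_area_mm2: float, defect_density_per_cm2: float) -> float:
    """Classic Poisson yield model: Y = exp(-A * D0)."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-area_cm2 * defect_density_per_cm2)

D0 = 0.1  # hypothetical defects per cm^2

for name, area_mm2 in [("small chiplet", 80), ("big monolithic SoC", 400)]:
    print(f"{name:>20}: {poisson_yield(area_mm2, D0):.1%} of dies defect-free")

# small chiplet: ~92% yield; big monolithic SoC: ~67% -> far fewer good dies per wafer
```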
It should probably also match ISBN numbers, because when I did a search for Achtung Panzer I had all the language versions and releases mixed into the results. So it might be fine for books with a single release, but books with multiple releases and versions (and pictures on top of that) can be less fun to check :) Maybe you should also cache results for like 1-3 days or something.
Great suggestions, thanks. I'm working on a fix for ISBN numbers that will be up within a day. Caching is not something I've started on yet, but it's a great idea; I'll start looking into it shortly.
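For the caching part, a minimal sketch of what I have in mind (the 2-day TTL and the search_fn hook are placeholders; a real deployment would probably use Redis or similar rather than an in-process dict):

```python
import time

CACHE_TTL = 2 * 24 * 60 * 60  # keep results for ~2 days
_cache: dict[str, tuple[float, object]] = {}

def cached_search(query: str, search_fn):
    """Return a cached result if it is still fresh, otherwise re-run the search."""
    now = time.time()
    hit = _cache.get(query)
    if hit and now - hit[0] < CACHE_TTL:
        return hit[1]
    result = search_fn(query)   # e.g. the real book/ISBN lookup
    _cache[query] = (now, result)
    return result
```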
Is anyone actually using 64 cores besides Linus? :) I'm much more excited about the 7900X and its 12 cores than about 64 cores. But I understand that the limited number of people who need this much power on a desktop can also be excited.
I could use an almost unlimited number of cores for fuzzing and compiling. Currently I have to limit my fuzzing runs to 12 cores because the three-year-old AMD machine can't handle more without impacting other development work.
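For what it's worth, limiting a run to a core subset can be done from the launcher script; a small Linux-only sketch using Python's os.sched_setaffinity (the fuzz_target.py name is made up):

```python
import os
import subprocess

# Pin the fuzzing job to cores 0-11 so the remaining cores stay free
# for builds and other development work.
FUZZ_CORES = set(range(12))

proc = subprocess.Popen(["python", "fuzz_target.py"])  # hypothetical fuzzing entry point
os.sched_setaffinity(proc.pid, FUZZ_CORES)             # restrict which cores it may run on
proc.wait()
```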
But then, if 64 is not useful to you, why the 7900X instead of the 7700X and its 8 cores? The 7700X is way less power hungry and boosts to nearly the same speed as the 7900X.
Genuinely asking as I plan to replace my Ryzen 3700X with a 7700X.
I'm on a 3900X and was planning to upgrade to a 7900X; I tend to run a few VMs. But I'm not sure yet. It would be cool to get on DDR5, but it feels like this time it's just upgrading for the sake of upgrading. So, not sure.
Some common dev workloads that benefit: huge builds (especially for C++ and Rust), running lots of VMs to stand up a copy of a cloud infrastructure locally, emulating foreign hardware for testing (qemu), and large-scale data analytics done locally instead of paying some ridiculously expensive SaaS to do it.
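As a trivial illustration of the last point, this is the kind of embarrassingly parallel local analytics that scales almost linearly with core count (the word-count job and the logs/*.txt dataset are just stand-ins):

```python
from collections import Counter
from multiprocessing import Pool
import glob

def count_words(path: str) -> Counter:
    """Count words in one file; each file is an independent unit of work."""
    with open(path, errors="ignore") as f:
        return Counter(f.read().split())

if __name__ == "__main__":
    files = glob.glob("logs/*.txt")      # hypothetical local dataset
    with Pool() as pool:                 # defaults to one worker per core
        totals = sum(pool.map(count_words, files), Counter())
    print(totals.most_common(10))
```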
I'm actually considering upgrading to a 5800X or 5800X3D as a cheap temporary upgrade, since the newest generation (which I initially planned on) just seems too expensive given the need for new DDR5 and new, very expensive motherboards which will likely need at least another year to mature. So far I've been leaning towards the pretty cheap 5800X (€280 vs €450 for the 5800X3D), since the difference actually doesn't look that big for real workloads (and since it's an upgrade for a shorter-than-usual timeframe). Is the 5800X3D actually 60% better in non-gaming workloads, enough to be worth it? If not (as it seems to me), I'm not sure why waiting for the next 3D part specifically makes sense.
X3D seems to shine in gaming but likely helps with other code that is not computation-heavy as well. If you don't need AVX-512 or higher memory bandwidth, either 5800 CPU is probably good. The X3D is going to be a bit more future-proof, letting you put off the next upgrade, judging by how well the 5775C holds up even today.
I've seen people ask interview questions they can't answer on their own. Always ask yourself if you really want to work in a place that has this type of person in commanding positions.
Also, given how saturated the market is with job offers, YOU choose, not the EMPLOYER. This might change in the future, but be aware of the current balance; you can game it.
Also, even if you have infinite skill, you might not be liked for some random reason. That happens; good and bad days happen, and the interview is usually harder for the interviewer than for the interviewee.
You are showing your GOOD sides, and they are assessing both your BAD and GOOD sides.
There may be a saturation of job offers, but the offers aren't good. I'm not getting staff offers for $500k+/yr from a random enterprise IT consultancy. I'd be lucky to even get granted stock at many of these companies, let alone a $200k+ salary.
This feels very non-reusable to me. The guy has an idea that protocol buffers are the way to go. Cool, but in many cases you really want to put a queue in front of things like email sending etc. And if you make an email microservice that can't handle attachments, it again isn't reusable.
I would be more in favour of having the option to point every service at a RabbitMQ queue to listen on, with a message format that contains the delivery-report queue to use for notifying the rest of the system once the email is sent.
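Roughly what I mean, as a sketch using pika (the queue names, the reply_to field, and the message shape are all made up for illustration):

```python
import json
import pika

# The email service listens on "email.send"; the producer names the queue
# where it wants the delivery report posted once the mail goes out.
message = {
    "to": "user@example.com",
    "subject": "Welcome",
    "body": "Hello!",
    "reply_to": "email.delivery-reports",  # hypothetical delivery-report queue
}

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.queue_declare(queue="email.send", durable=True)
ch.basic_publish(exchange="", routing_key="email.send", body=json.dumps(message))
conn.close()
```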
And here we come to the conclusion: everyone has their own idea. This is why it is hard to make really good reusable microservices unless each of them becomes not so micro.
Nice that people try, but I'm still in the camp of building from the Lego bricks provided by libraries and creating my own microservices, interconnected in whatever way fits the given project. That might be protobuf, HTTP, or queue systems.
That's the vibe I was getting; these all feel like focused libraries, but using them in an application architecture feels like you would end up with a distributed monolith. You need resilient task queues to turn it into a 'proper' microservices architecture, which is honestly a much more challenging task than encapsulating concerns like sending emails or geocoding.
The other thing, as another commenter pointed out, is that these are nanoservices; I find it really difficult to imagine a situation where you would ever need 10x of one service for 1x of another, for example. That's a litmus test for microservices: do they have to scale independently?
A lot of these just interface with an external service that does the heavy lifting, so the performance requirements for each of them are low; there's little to no value in running them on separate, independently scalable VMs.