You are right, but that misses the flavor of the problem. I was an infosec consultant to F500s for many years. Often, solving a problem involves simply knowing the right person who has already thought about it, or toiled on the problem or a similar one. But when there are 100,000 engineers it becomes an order of magnitude (or two!) more difficult, and that puts forth unique challenges. You can still call them “people problems,” and they often may be. However, if you try to solve them the same way you might at a smaller engineering org, you will get nowhere and be poorer for the time you spent trying. Ask me how I know, lol. The technical problems are like that too. Almost everything has an analog to something you are probably familiar with, but it is scaled out, has a lot of unfamiliar edges, and is often just different enough that you have to adjust your reasoning model. Things you can just do at even a typical F500 you can’t just do at big-tech scale. Anyway, you are directionally correct, and many of these wounds are self-inflicted. But running a company like Google or Facebook is ridiculously hard and there are no easy answers; we just do our best.
Well, there are ~2.4 million people in Chicago. In the last 12 months there were ~2,600 shootings, i.e. roughly 50 per week. Ignoring all other factors, your odds of being shot in a given week are therefore about 50 in 2,400,000, or 1 in 48,000. As a visitor you are also unlikely to be in the higher-crime areas where shootings are concentrated. In short, no, it is probably not something to worry much about.
It absolutely will be in business. The structural advantage and trajectory are tremendous. It is treated like an enterprise organization and run very differently from the traditional Google business. Products are supported and customers have significant influence on the business. (Disclaimer: I work in Cloud @ Google.)
I think they will be in business, but don't pretend products will be supported. It wouldn't surprise me if Datastore / App Engine go away in the future. The number of APIs I have had to migrate over the past 5 years of using GCP is shameful:
- we could no longer do code deploys on internal WebApps that were on old Python 2/3 images, and had to stick with that revision until we migrated to newer versions
- we had to migrate to a newer Firebase messaging API recently
- search features were mostly killed on GCP, which is ironic. I guess we're supposed to build our own search or use a third party? Postgres/Spanner full-text search it is, then.
Looking forward to having to migrate my gen1 Cloud Functions too! Keeps me employed, I suppose.
It likely no longer matters. Not in the sense that AI replaces programmers and engineers, but it is a fact of life. Like GPS replacing paper navigation skills.
I grew up never needing paper maps.
Once I got my license, GPS was ubiquitous.
Most modern paper maps are much the same as Google Maps or its equivalents, though. The underlying core material is the same, so I don't think most people would struggle to read them.
I think learning and critical thinking are skills in and of themselves, and if you have a magic answering machine that does not require these skills to get an answer (even an incorrect one), it's gonna be a problem. There are already plenty of people who will repeat whatever made-up story they hear on social media. With the way LLMs hallucinate, and even double down when corrected, this is not going to make things better.
>Most modern paper maps are much the same as Google Maps or its equivalents, though. The underlying core material is the same, so I don't think most people would struggle to read them.
That's absolutely not the case: paper maps don't have a blue dot showing your current location. Paper maps are full of symbols and conventions, and they have a fixed scale...
Last year I bought a couple of paper maps and went hiking. Although I am trained in reading paper maps and orienting myself, and the area itself was not that wild and was full of features, I still had moments when I got lost, had to backtrack, and had to make a real effort to translate the map. Great fun, though.
The amount of C++ written at my company every day is… a lot. We are slowly fighting our way from it toward memory safety, but it is hard. It will take a decade.
At the company I currently work for we also use C++, and I am quite proficient in it. But the number of times I have slowed myself down with simple lifetime issues makes me want to switch to a more memory-safe language, whether that is C++ with profiles or a whole new language such as Rust.
For example, a week back I lost a few hours finding a segfault bug in C++ code, which ended up being a trivial lifetime error: I had used a reference after it was invalidated by a std::vector resize.
These kinds of errors should be compile-time errors, rather than hard-to-trace runtime errors.
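To make the failure mode concrete, here is a minimal, hypothetical sketch of that kind of bug (not the actual code from the incident): a reference into a std::vector is held across a push_back that reallocates the buffer, and the later read is undefined behavior.

```cpp
#include <iostream>
#include <vector>

int main() {
    std::vector<int> values;
    values.reserve(1);       // capacity of exactly one element
    values.push_back(42);

    int& first = values[0];  // reference into the vector's current buffer

    values.push_back(7);     // exceeds capacity -> reallocation; `first` now dangles

    // Undefined behavior: reading through a dangling reference. It may print 42,
    // print garbage, or segfault, depending on the allocator and build flags.
    std::cout << first << "\n";
    return 0;
}
```

Nothing in the C++ type system flags this today; a borrow checker (or lifetime profiles, if and when they arrive) would reject holding `first` across the mutating call.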
How does your company go about moving to memory safety? Are new projects / libraries written in Rust, for example? Do existing projects / libraries get (partially) rewritten?
It is very hard to reason about lifetimes, and they can eat you alive. We have a lot of guidelines and strategies to simplify things, but it still just isn't amazing.
What the cows eat matters for how the milk tastes, too. Cows can get sick. Udders can get infections. Milking equipment and its ease of cleaning can vary. Bacteria are everywhere. Pasteurization is cheap, effective, and has no real drawbacks. This whole raw milk thing is just silly and has become political for some reason.
Part of the reason for this is likely customers' preference to have CUDA available, which TPUs do not support. TPUs are superior for many use cases, but customers like the portability of targeting CUDA.
My limited understanding is that CUDA wins on smaller batches and jobs, but TPUs win on larger ones. CUDA is just easier to use and better at typical small workloads; at some point, for bigger ML training and inference loads, TPUs start making sense.