I’d argue that’s not an ideal Prolog solution. It’s really just a recursive implementation of an imperative one.
For fractals you’ll want to be able to recognize and generate the structures. It’s a great use case for Definite Clause Grammars (DCGs). A perfect example of this would be Triska’s Dragon Curve implementation. https://www.youtube.com/watch?v=DMdiPC1ZckI
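For a flavor of what that looks like (a minimal sketch, not the code from the video): the dragon curve is an L-system with axiom "fx" and rewrite rules x -> x+yf+ and y -> -fx-y, and each rule maps directly onto one DCG rule:

    % depth-bounded expansion; at depth 0 every symbol stands for itself
    dragon(0, S) --> [S].
    dragon(N, x) --> { N > 0, N1 is N - 1 },   % x -> x + y f +
        dragon(N1, x), [+], dragon(N1, y), [f], [+].
    dragon(N, y) --> { N > 0, N1 is N - 1 },   % y -> - f x - y
        [-], [f], dragon(N1, x), [-], dragon(N1, y).

    curve(N) --> [f], dragon(N, x).            % the axiom is "f x"

Then ?- phrase(curve(10), Cs). yields the turtle instructions: f means "draw forward", + and - are 90-degree turns, and x and y are ignored when drawing.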
I would agree. I was actually hoping to be proven wrong with that example. I have yet to see anything other than a constraint program that looks easier to understand in logic programming, sadly.
Adding late to this, as I didn't get to actually look at this video yesterday.
I would still agree that you can do better than the examples I'm finding, but I'm not entirely clear on why/how the dragon curve is honestly any better here? The Prolog, notably, does not draw the curve; it's just used to generate the sequence of characters that describes it. But... that is already trivial in normal code.
Actually drawing it will get you something like this: https://rosettacode.org/wiki/Dragon_curve#Prolog. Contrast that with the Logo version and you can see how the paradigm of the programming language can make a giant difference in how the code solution looks.
Yes, it shouldn’t be surprising that a website started by people like Paul Graham and Sam Altman would naturally be full of hyped-up VC nonsense. Their entire business model is to hype up these ideas and the startups working on them and then cash out!
I recently left a job at a very different large company after a similar timeframe (a little under ten years). Pretty much everything this author describes matches my experience.
There is nothing all that special about Google. Maybe there was twenty years ago, but that ship has long since sailed. It’s just another large US tech company. Like Microsoft and IBM before it.
For a long time Google had cachet as the most engineering-friendly big tech firm (which was mostly deserved) and also as the place with the highest level of talent (which is more team-dependent but also somewhat deserved). You might end up working on ads or some other inane thing, but at least your engineer coworkers would be really good. They're still riding that wave to some degree because they haven't scared away all their top talent yet.
> It’s just another large US tech company. Like Microsoft and IBM before it.
This is just a hyperbolic statement that should not be taken seriously at all.
Look, Google isn't the fantasy land some people might have made it out to be once upon a time, and it isn't unique in terms of pay or talent, but it is certainly in the top echelon.
I did an interview loop for a high-level IC role at both Azure and GCP simultaneously, and the difference in talent level (as well as pay) was astounding.
IBM has never been a company where engineers could rise to the same ranks as directors and higher on a solid IC track.
Is Google special compared to Apple/Netflix/Meta? No. Is it special compared to Microsoft, IBM, and any non-FAANG company that isn't a top decacorn? Yes.
Microsoft and IBM used to have similar, extremely talented teams. IBM ran research centers full of the world's top Ph.D.s. The innovation that happened at those places easily rivals Google's.
It's a similar trajectory, is what people are saying. When Google was small and everyone wanted to work there, they could take their pick of the top talent. When you run a huge company, you inevitably end up with something around the average. I.e., all those huge companies that pay similar wages and do similar work basically end up with similar talent, give or take, and within that there are likely some great people and some less-than-great people.
Yes! It’s sad how ignorant of IBM and US technology industry history some of these comments are. Then again, I suppose every generation does a lot of its own “this time we’re different” myth making. Not everyone has the wisdom to see the broader context.
Indeed. I think it's because the younger generation couldn't physically have experienced it, while for the older generations it's complicated to get into a disruptive startup.
Obviously people could read about the past, but sometimes that's asking too much; they are busy creating "the future".
>I personally know people who moved up the ranks there to director and above,
I didn't mean that engineers can't become directors, I meant that IBM didn't have a track for top ICs to get paid more than directors and still not be on a manager track.
> ...both Azure and GCP simultaneously, and the difference in talent level (as well as pay) was astounding.
This is maybe the third time I've heard this mentioned here on HN, so now I'm curious: What specific kinds of differences?
I imagine there might be a certain kind of prejudice against Microsoft and its employees, especially for "using Windows" or whatever, which I've often found unfairly colors the opinions of people from Silicon Valley who are used to Linux.
If you don't mind sharing, what specific differences did you notice that gave you a bad impression of the Microsoft team and such a good impression of the Google team?
Overall talent level. Almost everyone I've interviewed with at Google impressed me and came across as thoughtful and kind.
I did interviews with many teams at Microsoft (9 technical interviews total) and the only person that impressed me is now at OpenAI.
Every single interview question I got at Microsoft was straight out of intro CS / classic Leetcode.
They would straight up ask "find all anagrams", "string distance", "LCA of a tree".
Google instead disguises many classic CS questions, so it takes a lot more thinking. Microsoft seemed to just verify that you can quickly regurgitate classic algorithms in code.
I'm sure there are some great teams at Microsoft, but because each division/org is much more siloed, I think it's more likely a team has a lower overall bar.
Google makes everyone pass through a hiring committee and you're interviewed by people that have nothing to do with the team you might end up on. Meta is similar. Amazon has the team interview you, but they also have bar raisers come from other teams.
Microsoft seems to be the outlier here, in that someone can get onto a team by interviewing only with people on said team.
There's a bit of a contradiction in the article. The main objection is the author's feeling of uneasiness in open spaces: "liminal spaces" created by, for example, large parking lots. The author then complains that these wide open spaces are not "walkable". What? They are certainly walkable by their very design! What they are not, and this seems to be the real objection, is cozy spaces lined with taquerias and coffee shops.
Walkable in the sense that I can meet most of my needs via walking.
I live 2 blocks away from a grocery store, for example. There is a 24-hour pharmacy roughly the same distance away, and a couple of coffee shops + a gas station + a McDonald's not too much further away.
There is an expansive set of tennis courts and a beach volleyball area within walking distance, and next to it is a great park & playground. A bit further in the other direction is an elementary school with playgrounds, and beyond that, at the edge of what I'd consider walkable, is a splash park.
Get on a bike and the offerings double.
Meanwhile, my parents are 20 minutes away from anything outside of a single gas station. Plenty of nice houses, and at least one school and fire dept., but they basically have to drive into town -- even though they're surrounded by houses -- just to snag a simple coffee or quick grocery store run.
The article states this person is walking in a commercial area; that is what is creating the liminal space. She's not strolling through the woods. She's basically just bothered that her neighborhood isn't gentrified enough yet.
This article predates the resignation of SawyerX (due to a lot of the abuse and misery heaped on him), who was the Perl release manager (aka "Pumpking") in charge of Perl 7.
The short version of it was that there was a bit of a power play by people who felt ignored and wanted a bigger part of what was felt to be an important development.
I'd say you're being a bit overly cynical. There's plenty of good news in Perl too. Specifically, since you mentioned smartmatch, that's been pretty well fixed as of a couple of weeks ago with Switch::Right: https://metacpan.org/dist/Switch-Right
Switch modules have been around for years. There's Moo and Try::Tiny too. Some of this stuff should really be part of the language by now. Same with exporting functions.
Sure. I've been using the language for more than 10 years, but this is dumb: different modules for every trivial feature that should be a language feature instead. Smartmatch is a perfect example. It smooths off nothing. I'll be off using Ruby, thank you.
You manage feature differences one way or another. If you like choosing rbenv vs rvm vs asdf and then using them to manage your Ruby versions and gem dependencies, rather than having in-band switches in a single interpreter, great, you're welcome to it. I could even see someone making a case that it fits neatly within an org where systems/devops folks take on more of the environments/dependencies division of labor.
If what you really like, though, is the charge you get out of just saying "this is dumb" while indulging the privilege of not even noticing that you did a repeat performance of unsupported shoulds vs worthwhile tradeoffs, well, maybe you should examine that.
I use the system Ruby and don't have to worry too much about rbenv and rvm. 2.7 and even 3.0 are well supported. That's what I also did with Perl, except when I used macOS, which was a pain because of modules that used C libraries like LibXML. On Linux we can also use containers without worrying about a speed penalty. There are sufficient solutions and okay tooling. Ruby's also got not one but two JIT compilers right now.
* The module system literally just runs a script, so it can do -anything-. As a result there are 3 or 4 competing install systems, all with their own cruft, some defined entirely using Perl as config, some using YAML. You need to have all of them installed.
* Of these, Module::Build is a common one written by someone who completely overengineered it, and it installs hundreds of dependencies, even though all it really does is just copy some files.
* Install scripts can do stuff like ask you interactive questions that prevent automated installation, which is a constant hindrance when packaging Perl modules, e.g. to RPM.
* Perl leaves literally everything up to external modules, including exporting functions or even defining module dependencies (e.g. 'use base', 'use autoclean', 'use Exporter' ...), and often the module config is written entirely in Perl rather than a YAML or JSON file, so trying to do anything clever (like adding IDE/language server support) is an absolute nightmare.
* The client to install new modules initially asks about 20 questions and does a slow mirror test, making it difficult to use in automated settings. Luckily someone wrote cpanminus which literally does exactly what you want - installs the damn package.