https://www.billhartzer.com
My personal blog; I post whenever I feel there's something post-worthy. Lately it's been stuff I'm passionate about that relates to what I do professionally.
I remember Crazy Eddie's very well. I was a teenager in the 1980s, and I still have the portable Sony Discman I got there for $100. I mowed a lot of lawns in order to save up for that Sony Discman from Crazy Eddie's.
This post went viral back in 2020. I remember seeing it when he first did this using the wagon.
This was actually the inspiration for a new "service" that's out there now, which isn't publicized much (for obvious reasons).
The person running the service bought over 100 iPhones and uses TeamViewer to control all of them. The 100+ phones connect to five different hotspots for internet access. They open Google Maps on each phone, set the driving directions to a certain local business (the business paying for the service), and then literally drive to the business with all of the phones. This gets the business better Google Maps rankings, since it looks "popular". Reviews and photos are also left using some of the accounts.
Exactly. This post is a kind of "let's make a new Google! I know how to do it better" (while ignoring 1,000 other factors outside of my bubble).
Even if the Dunning-Kruger effect has been debunked, this case shows that people do work this way.
Patterns was stolen around the same time as Perl.com. I suspect the same individual stole both domains at the same time, along with a few others as well.
I will never forget that day. It was Saturday morning, around 8:00am CST, and my wife and I were lying in bed, me barely awake. We were living in East Texas at the time. It was the loudest 'boom' I've ever heard, and it shook the house a bit.
Can you point to your source? From a quick glance at their documentation, it seems like you've got it backward.
> A robots.txt file tells search engine crawlers which URLs the crawler can access on your site. This is used mainly to avoid overloading your site with requests; it is not a mechanism for keeping a web page out of Google. To keep a web page out of Google, block indexing with noindex or password-protect the page.
They won't crawl it, but if they already did crawl it, or if there's a link to it, it will still be part of their index (and thus the search results).
They go even further on their noindex page, saying that the page needs to be crawlable (and thus not blocked by robots.txt) so that they can see the noindex directive on it...
> Important: For the noindex directive to be effective, the page or resource must not be blocked by a robots.txt file, and it has to be otherwise accessible to the crawler. If the page is blocked by a robots.txt file or the crawler can't access the page, the crawler will never see the noindex directive, and the page can still appear in search results, for example if other pages link to it.
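To make that interplay concrete, here's a minimal sketch in Python (standard library only) of the crawler-side logic the docs describe. The function name and the "Googlebot" user-agent string are just illustrative, not Google's actual implementation:

    import re
    import urllib.parse
    import urllib.request
    import urllib.robotparser

    def noindex_is_visible(url, user_agent="Googlebot"):
        """True only if the crawler may fetch the page AND the page
        carries a noindex directive the crawler can actually see."""
        rp = urllib.robotparser.RobotFileParser()
        rp.set_url(urllib.parse.urljoin(url, "/robots.txt"))
        rp.read()

        # If robots.txt blocks the URL, the crawler never fetches it,
        # so any noindex on the page is invisible -- exactly the trap
        # the documentation warns about.
        if not rp.can_fetch(user_agent, url):
            return False

        with urllib.request.urlopen(url) as resp:
            # Header form of the directive.
            if "noindex" in (resp.headers.get("X-Robots-Tag") or "").lower():
                return True
            html = resp.read().decode("utf-8", errors="replace")

        # Meta-tag form of the directive.
        return bool(re.search(
            r'<meta[^>]+name=["\']robots["\'][^>]*noindex', html, re.I))

A blocked page returns False here no matter what's in its HTML, which is why robots.txt and noindex work against each other.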
Because of stuff like this, Google doesn't rely on links much anymore, but these scammers and SEO nuts aren't going to stop because of that. To them, it just means they need more links to make up for it.
The scammers will never stop doing this. Google has stopped relying on links as much (by changing how sites are scored based on perceived authority), but that just shifts the manipulation techniques accordingly.
Google’s original crème de la crème is PageRank: the idea that links to your page have more weight than keyword spam. They don’t seem to want to abandon that idea.
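For anyone who hasn't seen it: at its core, PageRank is just a power iteration over the link graph. A toy sketch in Python; the 0.85 damping factor is from the original paper, and the example graph is made up:

    # Toy PageRank via power iteration (damping 0.85 as in the paper).
    def pagerank(links, damping=0.85, iterations=50):
        """links: dict mapping each page to the pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            new = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in links.items():
                if not outlinks:  # dangling page: spread its rank evenly
                    for p in pages:
                        new[p] += damping * rank[page] / n
                else:
                    share = damping * rank[page] / len(outlinks)
                    for target in outlinks:
                        new[target] += share
            rank = new
        return rank

    # "home", linked from the most pages, ends up with the highest
    # rank -- links as votes, rather than counting keywords on the page.
    graph = {"home": ["about"], "about": ["home"],
             "blog": ["home"], "orphan": []}
    print(pagerank(graph))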
They can't; the web is too large to work without it, and it is (legitimate) links that make it possible to distinguish a well-known site from a pop-up plagiarist that just copies all of the text. But for it to work on the modern web you need to be able to distinguish high-quality links from bogus link-farm links.
It's more that they seem unwilling to admit that purely automated methods don't achieve optimal results when they're being fought by equally automated scams. They could bump their overall ranking quality significantly by blacklisting known bad actors that scrape real sites and put up hollow shells of barely edited, machine-munged plagiarism, yet they don't.
And that’s fine. PageRank is/was a genius idea that should still be used today. But they need to do more. When I search for something and half the results are Markov-chain-like blogspam full of ads, I have to wonder what's funding the effort that goes into those things (definitely not AdSense /s).
If anyone can combat it, Google can. It doesn't seem that hard to me to assign low (or negative) weight to links that suddenly appear on a page that hasn't otherwise changed in years, for example, or to links to the same site on pages that otherwise have nothing to do with each other in terms of content or subject. I'm sure it's harder than that, but all those engineers ought to be earning their six-figure salaries.
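Purely to illustrate the kind of heuristic I mean (every field name and threshold below is invented, and the real problem is surely far messier):

    from datetime import timedelta

    # Illustrative sketch only -- all fields and thresholds are made up.
    def link_weight(link, page):
        """Score one outbound link found on a crawled page.

        link: dict with 'first_seen' (datetime) and 'topic_similarity'
              (0..1 similarity between the link target and this page).
        page: dict with 'last_content_change' (datetime).
        """
        weight = 1.0

        # A link whose target has nothing in common with the page's
        # subject (casino link on a knitting blog) is worth little.
        if link["topic_similarity"] < 0.2:
            weight *= 0.1

        # A link that suddenly appears on a page whose content hasn't
        # otherwise changed in years looks injected or paid for.
        dormant = link["first_seen"] - page["last_content_change"]
        if dormant > timedelta(days=2 * 365):
            weight -= 1.5  # can go negative: treat it as a spam signal

        return weight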
But how do you translate that into (short-term) OKRs and work that engineers can build a career upon as an achievement (for promotion)? I think there's a cultural mismatch between how engineers grow and what they work on at Google versus the quality of the search results. There seems to be very limited upside and huge potential downside to working on this at Google.
That said, I don't work for Google, and my conjecture is based on the hand-wavy details posted online by engineers who work or have worked there.
I heard it didn’t work very well for them. The results weren’t very good or helpful.