I really wanted to love the Common Crawl corpus. I needed an excuse to play with EC2, I had a project idea that would benefit an open source project (Mozilla's pdf.js), and I had an AWS gift card with $100 of value on it. But when I actually got to work, I found the choice of Hadoop sequence files containing JSON documents for the crawl metadata absolutely maddening and slammed headfirst into an undocumented gotcha that ultimately killed the project: the documents in the corpus are truncated at ~512 kilobytes.
It looks like they've fixed the first problem by switching to gzipped WARC files, but I can't find any information about whether or not they're still truncating documents in the archive. I guess I'll have to give it another look and see...
I'd have to check the last crawl's settings, but I believe the last crawl was set to truncate at 1 MB (response body size, so that could be 1 MB uncompressed or 1 MB compressed, depending on what the source web server sent out).
At one point I tried out a 10 MB limit, but we try to limit crawls to webpages and few are that big; occasionally we'd hit sites on ISDN-speed connections that would slow the whole thing down.
For the next crawl, we'll mark which pages are truncated and which aren't (an oversight in the last crawl) so at least you can skip over them.
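For anyone who wants to check for themselves once that lands, here's a minimal sketch of scanning a crawl file for that flag. It assumes the standard WARC-Truncated header is what gets used and leans on the warcio Python package; both are assumptions on my part, not something confirmed above, and the filename is a placeholder.

```python
# Sketch: scan a gzipped WARC file and report records the crawler flagged as
# truncated via the WARC-Truncated header. Assumes the `warcio` package.
from warcio.archiveiterator import ArchiveIterator

def list_truncated(warc_path):
    with open(warc_path, 'rb') as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type != 'response':
                continue
            truncated = record.rec_headers.get_header('WARC-Truncated')
            if truncated:  # e.g. 'length' when the body hit a size cap
                url = record.rec_headers.get_header('WARC-Target-URI')
                print(f'{url}\ttruncated: {truncated}')

if __name__ == '__main__':
    list_truncated('CC-MAIN-example.warc.gz')  # placeholder filename
```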
Also, hopefully you'll find the new metadata files a little clearer. We switched over to the same format the Internet Archive uses, and it contains quite a bit more data (truncated xpath paths for each link, for instance).
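As a rough illustration of what that metadata looks like, here's a sketch of pulling the per-page link records out of a WAT file. It assumes warcio again, and the 'Envelope'/'Payload-Metadata' key names follow the Internet Archive WAT layout, so treat them as an approximation rather than gospel.

```python
# Sketch: WAT files are themselves WARC files whose 'metadata' records carry a
# JSON envelope describing the original response (headers, links, etc.).
import json
from warcio.archiveiterator import ArchiveIterator

def iter_wat_links(wat_path):
    with open(wat_path, 'rb') as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type != 'metadata':
                continue
            envelope = json.loads(record.content_stream().read()).get('Envelope', {})
            meta = envelope.get('Payload-Metadata', {})
            html = meta.get('HTTP-Response-Metadata', {}).get('HTML-Metadata', {})
            for link in html.get('Links', []):
                yield link  # each link dict may carry 'url', 'path', etc.

# Example usage (placeholder filename):
# for link in iter_wat_links('CC-MAIN-example.warc.wat.gz'):
#     print(link.get('url'))
```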
I love Common Crawl, but as I commented before, I'd still like to see a subset available for download, something like the top million sites. Ideally a few tiers of data, say 50 GB, 100 GB, and 200 GB.
I really think a subset like this would increase the value, as it would allow people writing search engines (for fun or profit) to suck a copy down locally and work away. It's something I would like to do for sure.
Ideally beyond the top sites, these subsets would be available as verticals, so that people can focus on specialized search engines.
While it's nice to have generalist search engines, it would be even better to be able to unbundle the generalist search engines completely. Verticals such as the following would be nice:
1) Everything Linux, Unix, and both
2) Everything open-source
3) Only news & current events
4) Popular culture globally and by country
5) Politics globally and by country
6) Everything software engineering
7) Everything hardware engineering
8) Everything maker community
9) Everything financial markets
10) Everything medicine / health (sans obvious quackery)
11) etc.
Maybe build a tool that lets the community write subset-creation recipes, each one parsing out data of a certain type, which the community can then fork and improve over time.
The ship for creating a generalist search engine has sailed, but specialist search engines are total greenfield.
You don't usually download this data - you process it on AWS to your requirements.
Seriously - they give you an easy way to create these subsets yourself[1]. That is a much better solution than them trying to anticipate the exact needs of every potential client.
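As a sketch of what building your own subset can look like in practice: the crawl prefix below is a placeholder, and anonymous access to the public commoncrawl bucket via boto3 is an assumption on my part.

```python
# Sketch: list a crawl prefix in the public commoncrawl S3 bucket and pull
# down only the first few WET text extracts as a tiny local subset.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client('s3', config=Config(signature_version=UNSIGNED))  # anonymous access
bucket = 'commoncrawl'
prefix = 'crawl-data/CC-MAIN-XXXX-XX/'  # placeholder crawl id

keys = []
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get('Contents', []):
        if obj['Key'].endswith('.warc.wet.gz'):  # text extracts only
            keys.append(obj['Key'])
        if len(keys) >= 5:  # keep the sample tiny
            break
    if len(keys) >= 5:
        break

for key in keys:
    s3.download_file(bucket, key, key.rsplit('/', 1)[-1])
```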
I guess what I was suggesting is "given enough eyeballs, all spam and poor-quality content is shallow".
There is definitely a benefit in using the community to identify valuable subsets and then individually putting your energy towards building discovery/search products around that subset.
Would love to have even smaller subsets (like 5 GB) that students can casually play around with to practice and learn tools and algos :) (if it's not too much trouble!)
The text extracts and metadata files are generated off individual WARC files, so it is pretty easy to get the corresponding sets of files. For the above it would be:
Is there any way to get incrementals? It would be extremely valuable to get the pages that were added/changed/deleted each day. Some kind of daily feed of a more limited size.
Totally true. Smaller versions aren't just helpful for casual/student use; they also help with code development and debugging. Otherwise, algorithm development gets impeded by scaling issues.
Very cool...though I have to say, CC is a constant reminder that whatever you put on the Internet will basically remain in the public eye for the perpetuity of electronic communication. There exist ways to remove your (owned) content from archive.org and Google...but once some other independent scraper catches it, you can't really do much about it.
I think about this from George Santayana's perspective: "Those who cannot remember the past are condemned to repeat it." I feel like we need our past recorded (good, bad, AND ugly). It keeps us civil and humble.
It would be great if Common Crawl (or anyone else) would also release a document-term index for its data. If you had an index, you could do a lot more things with this data.
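For the curious, a document-term (inverted) index is conceptually tiny; here's a toy in-memory sketch with made-up documents just to show the shape of the thing. Anything corpus-scale would obviously have to be built on Hadoop or similar rather than a Python dict.

```python
# Toy sketch: an inverted index mapping each term to the set of documents
# containing it, built from plain-text extracts.
from collections import defaultdict
import re

def build_index(docs):
    """docs: mapping of doc_id -> extracted plain text."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in re.findall(r'\w+', text.lower()):
            index[term].add(doc_id)
    return index

docs = {
    'doc1': 'open source search engines',
    'doc2': 'specialist search verticals',
}
index = build_index(docs)
print(index['search'])  # both doc ids
```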
Anyone have a good understanding of the difference between this and http://www.dotnetdotcom.org/? I've seen Dotbot in my access logs more than CommonCrawl, so I'm more inclined to believe they have a wider - but not deeper - spread.
Anybody want to take a guess at what percentage these 2B pages represent of the total surface web, at least? I can't find reliable figures; the numbers are all over the place. 5 percent?
I've yet to find an excuse to download some of this data to play with. I have a feeling my ISP will send around a bunch of suits to collect the bill in person if I ever go over my 500 GB monthly limit by downloading 102 TB of data, haha. I'd still like to download a subset of the data; from what I've read, that kind of idea is already in the works. I just can't think of what I would do with it, perhaps a machine-learning-based project.
I'm on Comcast and download around 3 TB/month with no problem. But seriously, why would you download big data to work with? It's cheaper and faster to do it in the 'cloud'!
We have switched the metadata files from JSON to WAT files. The JSON format did not allow specifying the multiple offsets into files necessary for the WARC upgrade, and WAT files provide more detail.
Is there a way we can access the data (a small subset, say 30-40 GB) without an AWS account? It requires a credit card, and as a student I don't have one.