My two cents on paid search engines: yes, there's a lot of awareness around privacy these days, so there could be a shift in user behavior, which opens up a big market. However, I'm skeptical because people are used to a free search engine that surfaces good results.
So privacy is a great value prop, but not so great that I'm willing to pay for it while living with worse search quality.
Another example of Google giving away a lot of data is the 50 trillion digits of pi [1], which amounts to about 42 TB of data (decimal and hexadecimal combined).
Google Cloud Storage. The files could be dumped as TFRecords in a bucket with "requester pays" enabled, so anybody could reproduce it with the open source code by paying for the cost of moving the data from GCS to the training nodes.
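For anyone who hasn't pulled from a requester-pays bucket before, here's a minimal sketch of what the consumer side could look like in Python. The bucket and object names are made up, and `user_project` is whatever project you want billed for the egress:

```python
# Sketch: download one shard from a hypothetical "requester pays" GCS bucket.
# You pay the egress from GCS to wherever your training nodes run.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket(
    "some-public-checkpoints-bucket",      # hypothetical bucket name
    user_project="my-billing-project",     # the project that gets billed
)
blob = bucket.blob("checkpoints/shard-00000-of-01024.tfrecord")  # hypothetical object
blob.download_to_filename("shard-00000-of-01024.tfrecord")
```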
It's interesting. It would take ~six years for the Z8 to break even compared to AWS, but traffic into and out of the machine would be $0, and I don't think you're running directly on the metal with AWS, so performance would probably be a bit higher. And then there's storage - I configured, uhh, 120TB of a mixture of SSDs and HDDs. I'm not even going to try to ask AWS for a comparable quote there.
I may or may not have added dual Xeon Platinum 8280s to the Z8 as well. :P
(They are definitely going to exceed their storage quotas.)
I want to see how well weights for these models compress, but it will take me some time to run this code and generate some. I'm guessing they won't compress well, but I can't articulate a reason why.
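In the meantime, here's a rough baseline you can run without the real checkpoints: compress random float32 tensors as a stand-in (that's an assumption on my part; actual trained weights may behave differently) and see how little zlib buys you.

```python
# Rough baseline: how well does zlib compress float32 "weights"?
# Random normal values are used as a stand-in for real checkpoints.
import zlib
import numpy as np

weights = np.random.default_rng(0).standard_normal(10_000_000).astype(np.float32)
raw = weights.tobytes()
compressed = zlib.compress(raw, level=9)

print(f"raw:        {len(raw) / 1e6:.1f} MB")
print(f"compressed: {len(compressed) / 1e6:.1f} MB")
print(f"ratio:      {len(compressed) / len(raw):.3f}")
```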
Is this because they're afraid of the model being misused, e.g. for generating fake reviews? It's frustrating that I keep hearing great news about NLP but can't try any of it myself.
It's because the model weights are the valuable thing here. The fancy new architectures are nice and everything, but transformer models are a dime a dozen these days. Seems like they're using this as an example to point at and say "Hey, look at us, we support open source!", when in practice, unless you're willing to spend a small fortune on compute (possibly using their GPUs), these models are somewhat useless.
is the "retrofit" strategy living in the past? living in 2021 it seems a bad choice to buy gasoline cars. Most new cars coming out will have some kind of driving assist (L2 autopilot).
It's the opposite. I already own a compatible gas car. Instead of wasting resources on a new car, I can just retrofit the one I already own.
Also, I desperately want an electric car. But I need a minivan because (post-pandemic) I'm often driving six people around who are elderly or children and can't climb into an SUV.
There is no such electric van. This is the only way I can get "autopilot" in a van.
I wonder how Clubhouse would monetize its traffic. Most social networks have had to present ads in a form similar to their content, e.g. Twitter, Instagram, TikTok. But I can't imagine clicking on and listening to a "conversation" that advertises a product...
I did web scraping professionally for two years, on the order of 10M pages per day. Performance with a browser is abysmal and requires tonnes of memory, so it's not financially viable at that scale. We used browsers for some jobs, but rendered content isn't really a problem: you can simulate the API calls (common) and read the JSON, or regex the inline script and try to do something with that.
I'd say 99% of the time you can get by without a browser.
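As a concrete (made-up) example of "simulate the API calls and read the JSON": once you've found the XHR endpoint in the browser's network tab, the scraper is often just a plain HTTP request. The endpoint, params, and response keys below are placeholders:

```python
# Hit the site's own JSON endpoint directly instead of rendering the page.
# Endpoint, params, headers and response keys are placeholders - copy the
# real ones from the browser's network tab.
import requests

resp = requests.get(
    "https://example.com/api/v1/products",          # hypothetical endpoint
    params={"page": 1, "per_page": 100},
    headers={"User-Agent": "Mozilla/5.0", "Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()
for item in resp.json()["items"]:                   # key name is an assumption
    print(item["id"], item["name"])
```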
Rendering the page in Puppeteer / Selenium and then scraping it from there sounds a lot easier than somehow trying to replicate that in your scraper?
If they're really being generated client-side, you're free to generate them yourself by any means you want. But also, that's a strange thing for the website to do, since it's applying a security feature (signatures) in a way that prevents it from providing any security.
If they're generated server-side like you would expect, and sent to the client, you'd get them the same way you get anything else, by asking for them.
I'm not sure what your point is. Of course you can replicate every request in your scraper / with curl if you know all the input variables.
Doing that for web scraping, where everything is changing all the time and you have more than one target website, is just not feasible if you have to reverse engineer custom JS for every site. Using some kind of headless browser for modern websites is way easier and more reliable.
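For comparison, the headless route is just a few lines: render the page, then read whatever the JS produced. A minimal Selenium sketch with headless Chrome (the URL and selector are placeholders; a slow page may also need an explicit wait):

```python
# Render the page in headless Chrome and scrape the resulting DOM.
# Heavier than a plain HTTP request, but survives client-side rendering.
from selenium import webdriver
from selenium.webdriver.common.by import By

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com/some-js-heavy-page")   # placeholder URL
    titles = [el.text for el in driver.find_elements(By.CSS_SELECTOR, "h2.title")]
    print(titles)
finally:
    driver.quit()
```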
As someone who has done a good bit of scraping, I find that how a website is designed dictates how I scrape.
If it's a static website with consistently structured HTML where it's easy to enumerate all the pages I'm looking for, then simple Python requests code will work.
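For that case, something like this is usually all it takes (URL and selectors are made up):

```python
# Static, consistently structured HTML: plain requests + BeautifulSoup is enough.
# URL and CSS selectors are placeholders.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/catalog?page=1", timeout=10)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")
for row in soup.select("div.listing"):              # hypothetical selector
    name = row.select_one("h2").get_text(strip=True)
    price = row.select_one("span.price").get_text(strip=True)
    print(name, price)
```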
The less clear case is when to use a headless browser vs. reverse engineering JS / server-side APIs. Typically, I'll do a 10-minute dive into the client-side JS and monitor the AJAX requests to see if it would be easy to hit some API that returns JSON to get my data. If reverse engineering seems too hairy, then I'll just use a headless browser.
I have a really strong preference for hitting JSON APIs directly because, well, you get JSON! Also, you usually get more data than you even knew existed.
Then again, if I were creating a spider to recursively crawl a non-static website, then I think headless is the path of least resistance. But usually I'm trying to get data in the HTML, not the whole document.
>If they're really being generated client-side, you're free to generate them yourself by any means you want. But also, that's a strange thing for the website to do
what??
Page loads -> JavaScript sends a request to the backend -> the backend returns data -> JavaScript does stuff with it and renders it.