> Google's existence has spawned a horde of SEO spam sites that dilute and push down real content on the internet
They don't push real content "down the internet"; they push it down the Google rankings.
And while Google search is not as great as it was, it remains a strong contender for the best way to find stuff on the internet.
I don't think the internet would be better off without it.
You can, however, make good arguments for Google News, "Instant Answers" in Search, and other products taking (impressions) away from the internet. (And Chrome; Chrome worries me.)
> The question is, does it fulfill a cookie-using website's requirements under GDPR without additional UI
It cannot. That's the whole point of the GDPR: it forbids tracking without informed, explicit user consent, and users can neither be informed nor give consent through a header setting alone.
Sites can, of course, simply not track users, or not track users who set Do Not Track. They don't want to; that's why they try to annoy and/or mislead everyone into agreeing via their horrible banners.
(Using cookies for site settings or even logins can be done without explicit consent and without banners.)
The problem is that regulators were influenced by industry back then. The proper regulation would have required that the default state be no tracking and no consent banners, unless the user takes explicit action.
But there is a need for clarification here for the most often encountered consent case: web sites.
Basically the regulation could say: you must have consent to collect data, but you must ALSO observe a specific standardized method X for blanket-refusing consent in specific contexts. For example, "if Do Not Track is set in a web browser, then the user should not be shown a consent dialog but instead provided the service as if they had rejected the consent dialog".
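A minimal sketch of how that standardized rule could look on the server side, using Flask purely as an illustration (the `render_page` helper and its parameters are hypothetical):

```python
# Illustrative only: honor the DNT header as a blanket refusal, so the
# user is never shown a consent dialog and is simply not tracked.
from flask import Flask, request

app = Flask(__name__)

def render_page(tracking: bool, show_consent_banner: bool) -> str:
    # Hypothetical helper standing in for real template rendering.
    return f"tracking={tracking}, banner={show_consent_banner}"

@app.route("/")
def index():
    # "DNT: 1" is the standardized Do Not Track request header.
    if request.headers.get("DNT") == "1":
        # Treat DNT as a rejected consent dialog: no tracking, no banner.
        return render_page(tracking=False, show_consent_banner=False)
    # Otherwise fall back to whatever consent flow the site uses.
    return render_page(tracking=False, show_consent_banner=True)
```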
I realize that regulators (for good reason!) are very reluctant to specify specific technologies. It's not their home turf, and it's likely to be quickly outdated. But I'm ready to accept that this would be a time when there is a good reason to make an exception to that rule.
I sort of agree with you on that. I guess I'd like to see it not in the main body of the regulation, but as an additional law/regulation/addendum that reflects the current state.
> So what you paid for as an investor was a bunch of R&D research on how to replace natural and free sunlight with LEDs. (The fact that there are investor decks out there pitching that energy costs could be offset by investments in solar panels is laughably hilarious.).
Some of those already existing greenhouses are using LEDs to augment natural sunlight. [0]
So it doesn't seem too far-fetched to me. The gamble wasn't to replace (free) sunlight with (paid) LEDs. The core idea is stacking farms on top of each other, hoping the reduced land (area) requirement vs. the increased energy cost might pay off.
Turns out land isn't that scarce, transportation isn't that expensive, and energy isn't that cheap.
What's crazy is that all these variables were known ahead of time.
What is the cost of real estate in rural upstate New York? It's really not that bad! (About $10k per acre, based on a brief search, for land about 280 miles away from Manhattan. A truck could make a delivery in six hours or so.)
So why spend millions in capital expenditures on fancy stacked farms with high tech intense lighting, in order to cut a few tens of thousands of dollars in real estate costs?
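A rough back-of-envelope with the ~$10k/acre figure above; the farm size and the vertical-farm capex are assumptions picked only to show the orders of magnitude:

```python
# All inputs are rough assumptions, not real figures for any company.
acres = 5                          # assumed size of a conventional operation
price_per_acre = 10_000            # ~$10k/acre in rural upstate New York
vertical_farm_capex = 50_000_000   # assumed capex for a stacked indoor farm

land_cost = acres * price_per_acre
print(f"land: ${land_cost:,}")                                  # land: $50,000
print(f"capex ratio: {vertical_farm_capex / land_cost:,.0f}x")  # 1,000x
```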
The argument was that logistics costs were high. That's kind of true if you're on the east coast, and your alternative is importing from Mexico or California.
It's not true at all if you can get produce from a conventional hydroponic operation six hours away by truck.
Moreover, it's not like conventional hydroponic operations didn't exist, and this was some sort of tail risk that was unexpected. The conventional hydroponic operations already existed! They've been in business for years. All the founders or investors had to do was drive a few hours to go visit them. Or pick up the phone.
> So why spend millions in capital expenditures on fancy stacked farms with high tech intense lighting, in order to cut a few tens of thousands of dollars in real estate costs?
and ongoing maintenance. I'd imagine harvesting crops from a flat piece of land is also cheaper than from a vertical stack.
A few million dollars of conventional equipment can farm thousands of acres. How much does one of these vertical farming facilities cost, and how much food does it produce compared to those thousands of acres? Then add the cost of replacing all the sunlight blocked out by the roof.
> They've been in business for years. All the founders or investors had to do was drive a few hours to go visit them. Or pick up the phone.
The aim for venture capitalists is necessarily to make long-tail bets that have little chance of paying off, and that incumbents find too risky.
Sure, this bubble has popped, but what success in this case might have looked like is grocery stores selling mass-produced microgreen salads (that can't be reasonably transported without bruising) which were grown upstairs.
It's a reasonable criticism that VCs don't understand agriculture well enough to estimate the risk of "moonshot" style initiatives in that area. But criticizing them for "not picking up the phone and talking to existing operations" is kind of missing the point. Of course the existing operators are going to say it won't work.
The reason for the long-tail bets being improbable is important, though!
If the reason is because it takes lots of capital, or because there are network effects (e.g., AirBnB, eBay), or because it's a winner takes all market, or some key piece of technology has to be proven out, then sure, shoot your shot, someone will get the bullseye and win the day and it'll be a cakewalk raising the next fund.
But if it's improbable because nobody wants it, that's not the same thing.
That's why a bet being improbable should not be the reason to invest in something.
A VC may look at a bet and say, "sure, the win condition is unlikely, but lots of people will want this, so if it wins, it's worth billions." And maybe they write a check.
But if another VC looks at a deal and says "it's not likely to work, and also, even in the optimistic case this is worse than existing solutions," then there is no market problem being solved. This second VC will likely be leaning more on their management fee than their carry to pay the mooring fees for their yacht.
And a phone call to that farmer would have told them that while microgreens don't ship well over very long distances, they do fine in climate controlled trucks over a couple hundred miles. Part of a VC's job is to de-risk investment decisions by validating key assumptions.
> But criticizing them for "not picking up the phone and talking to existing operations" is kind of missing the point. Of course the existing operators are going to say it won't work.
This is true but irrelevant. The important part is asking them why it won't work and then seeing if that's correct. VCs are supposed to invest in risky things, not futile things.
Of course, as with WeWork, there's a big gap between what VCs are supposed to do in terms of due diligence and what they actually do.
VCs had so much money that they couldn't do that work. They have to invest all that money somewhere no matter what, even if a moment's thought would show it was a bad investment, because they need to show a return on investment, and sitting on a lot of cash isn't showing a return.
This is one way the little guy can beat the market: you are allowed to sit out the market for obviously flawed things. Beware though, that thinking is also why many little guys didn't get rich on Amazon, Microsoft, Apple, Google, and the other big names that at one time looked like bad investments.
In the last decade or so, VCs have been less interested in finding the next Facebook or Apple or Google than in finding something they could sell as such. So they always seemed to fund the same kind of founder, with the same kind of grand vision for the future. They did so long enough to keep the lights on until an IPO, or a SPAC as of late. Now that this route is basically dead, they seem to be reevaluating their portfolios differently. And that might actually include the non-marketing side of the due diligence nobody seems to have done so far.
Sure, risk profiles in a high-interest-rate environment function differently from those in a "here's some money at ~0% interest" one. And I wouldn't be surprised by ag-tech being off the table in portfolios for a few years, because it's not well understood by many VCs.
Existing clinical lab operators said that Theranos wouldn't work. And they were 100% correct.
Physicists and audio engineers said that UBeam wouldn't work. They were also correct.
If existing operators say that something won't work because it's too hard or because customers won't like it, then they could be wrong. But if their statements are backed up by hard science and economics calculations, then only a moron would disbelieve them.
Could it be the idea is simply way ahead of its time? Nuclear fusion could result in much cheaper energy, green hydrogen could bring down transportation costs, and human population ballooning to massive levels would drive up the value of usable land… but maybe all that is 30 years out.
As far as I understood, one of the questions was also whether you can increase the hours of light per day to get more rapid growth. Imagine plants with 22-hour grow periods vs. 14.
> I was surprised to see "sprawl" called out in the op because that is directly opposed to the flattening of building restrictions and the deregulation that is being pursued by both the state and housing activists
Strange. I understand sprawl (especially suburban sprawl) to be the direct result of housing regulations. The "natural" ("unregulated") trend seems to be for cities to become denser and for buildings to become taller (e.g. single-family homes getting replaced by multi-apartment buildings). Density is the opposite of sprawl.
More housing in an already populated area leads to denser populations leads to less sprawl.
"The "natural" ("unregulated") trend seems for cities to become denser and for buildings to become higher (e.g. single family homes getting replaced by multi apartment buildings)."
I agree with that.
Further, I would like to see cities like San Francisco and Oakland and San Jose become denser/taller in just the ways you are describing.
However, the article speaks of the entire Bay Area which runs the gamut from Point Reyes Station to Atherton to Strawberry to Dublin - and everything in between.
I don't live in a residential area and I have no financial exposure to residential housing in the Bay Area - so this is purely academic for me - but I really don't see why our inability to make San Francisco denser means people in Atherton (for instance) can't decline apartment high rises.
That outcome is, in fact, sprawl.
Further, the impacts of that sprawl go beyond the aesthetic: it starves infrastructure advances that depend on a critical mass of density.
Every unit of housing that gets distributed outside of the city center is a missing unit of density that would have gotten us closer to another subway, another terminal connection, another tunnel, another rebuilt blighted area, etc.
> That's not a data protection problem, however. No data protection law forbids digitization of government services. It's a common excuse, though.
It's a super weird one as well. There are no special extra data protection rules for digital data vs. data on paper.
Are they admitting they violate privacy laws already? Or do they want to change the information flow while going paperless? If so, why, and why is it necessary to go digital at all?
In some cases (taxes mostly) digital data has been associated with government-wide identifiers that make profiling citizens across all government agencies possible and easy. Linking even more data automatically and without asking the affected citizen to those identifiers rightfully gets pushback.
Paper often uses the same identifiers, but being paper, isn't automatically linked to all the other papers in other filing cabinets somewhere...
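To make that concrete (table names and records here are entirely made up): once every agency keys its records on the same citizen identifier, cross-agency profiling is a one-line join rather than a trip to another filing cabinet.

```python
# Illustrative sketch: a shared identifier makes linking automatic.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE tax_records(citizen_id TEXT, income INTEGER);
    CREATE TABLE health_records(citizen_id TEXT, diagnosis TEXT);
    INSERT INTO tax_records VALUES ('ID-001', 52000);
    INSERT INTO health_records VALUES ('ID-001', 'example');
""")

# No human ever has to carry a folder between agencies; that is the
# qualitative change compared with paper files in separate cabinets.
rows = db.execute("""
    SELECT t.citizen_id, t.income, h.diagnosis
    FROM tax_records t JOIN health_records h USING (citizen_id)
""").fetchall()
print(rows)  # [('ID-001', 52000, 'example')]
```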
Yup, it should not be surprising that the SAFARI scandal, which resulted in the creation of the French data protection authority in 1978, happened when the government attempted (and was forced to stop trying) to link various then-paper databases into a centralized one through a single social security number.
> Money, it turns out, wasn’t the point after all.
Not the only point. Asking them if they wanted to do the sales team's job without compensation would get you the obvious answer: they wouldn't. Money is the point; it's just not the sole point, and it cannot be stretched arbitrarily. "Money for rent", "money for retirement", "money for vacation" and "money for a second home" are not the same.
Interesting. So it only shows images that were transformed / mixed to get the output, but does not show images used to learn how to transform / select them?
Sounds very much like a human would do it.
If I 'know' how to recognize saurik and I know what anime is supposed to look like, I can check my digital photo library for a picture of saurik and then use that picture as a template to draw an anime version of saurik. If someone later asked me what pictures I used, the photo is the only one I'd present. Not the thousands of anime pictures I have seen teaching me what anime looks like, nor the picture my eyes took meeting saurik.
I think saying exactly what an AI of sufficient complexity is doing is a matter for philosophy more than science. But idk if transforming or mixing is how I'd describe what this is doing. In particular, it truly does not have a complete representation of any of the images it was trained on. They just wouldn't fit. It does of course have an understanding of how embeddings relate to images that is informed by the images it's seen, so maybe that counts, but I'm not sure it's useful in understanding the limitations or how to improve models like it.
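One way to make "they just wouldn't fit" concrete is to compare model size with training-set size. The numbers below are rough public orders of magnitude, not exact figures:

```python
# Back-of-envelope: bytes of model weights available per training image.
params = 1_000_000_000            # Stable Diffusion is roughly ~1B parameters
bytes_per_param = 4               # fp32
training_images = 2_000_000_000   # LAION-scale dataset, order of magnitude

model_bytes = params * bytes_per_param
print(model_bytes / training_images)  # => 2.0 bytes per training image
# A small JPEG thumbnail is tens of kilobytes; two bytes per image can't
# store copies, only statistics shared across many images.
```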
> i spoke to law profs about this - the analogy which kept coming up is the vcr. initially basically a piracy machine, it brought to life an enormous content market. had it been banned, creators would have been worse off in the long run.
It’s called Sony v Universal, and the legal doctrine for fair use that resulted is a test for “commercially significant non-infringing use”, of which a tool used for inpainting to remove power lines, latent space psychedelic visuals, and photo booth-painterly-style all are.
Imagine if Stable Diffusion was made illegal. Someone accuses me of using this illegal tool for one of these non-infringing uses, that is an image that doesn’t look like anyone else’s image as far as the court is concerned for copyright. I put the image on my website. If the image itself is not at all infringing, then what is the evidence that Stable Diffusion was used? Should the police be issued a warrant to search my private property for proof that I used Stable Diffusion without a shred of evidence or based on a tool that will always have both false positives and negatives?
I do want to clarify that I think Stable Diffusion and tools like it can engage in illegal copying. For example, it will happily produce infringing images of logos and even somewhat random other images: https://arxiv.org/pdf/2212.03860.pdf. It seems like it's devoting an uneven amount of its weights to different images, but I remain unconvinced that's all it can do, or at least that it does this any more than a human artist does.
This is what happens when you overtrain a model, too. Recent developments have allowed partial sets of model weights, called LoRAs, to be added to the diffusion model. These adapters can be fine-tuned independently in under half an hour. If you set the learning rate too high, the model will start reproducing the source material with extremely high fidelity. That is what overfitting does.
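For anyone unfamiliar with LoRA, here's a minimal sketch of the underlying idea, with illustrative names and sizes: you train two small low-rank factors and add their product to the frozen base weights, which is why fine-tuning is fast and the adapter can be shipped separately.

```python
# Minimal LoRA arithmetic; all sizes and values are illustrative.
import numpy as np

d, r = 768, 8                       # layer width, LoRA rank (r << d)
W_base = np.random.randn(d, d)      # frozen base model weights
A = np.random.randn(r, d) * 0.01    # trainable low-rank factor
B = np.zeros((d, r))                # second factor, initialized to zero
alpha = 1.0                         # merge strength / scaling

# Effective weights: base plus the low-rank update. Only A and B
# (2*d*r values) are trained, not the d*d base matrix, which is why a
# LoRA trains in minutes and overfits fast if the learning rate is high.
W_effective = W_base + alpha * (B @ A)
```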
My conclusion is there is an argument to be made for infringement in some cases, but it's based on degrees instead of absolutes. If infringement is defined as "copyrighted works were used in this dataset", then at a certain point (low enough learning rate) it becomes impossible to tell if infringing data was used. You'd be working with weight amounts that are so miniscule they could be rounding errors, yet by that definition would still be infringing.
And since any arbitrary data can be used with some set of keywords, the standard for what constitutes "infringing" changes with each model. As in, it would probably be hard to have a benchmark test that can definitively state "this model violates copyright." Any number of keywords can be trained on to obfuscate the prompt needed to reproduce the data, assuming there was even a high enough LR for the data to be reproduced similarly enough.
I'm unsure if there can ever be one standard for when a set of a bunch of floating point numbers passes the threshold for constituting infringement. This is applying an absolute standard to a fuzzy algorithm. It's like compressing a JPEG: at some level of compression, a picture of Mickey Mouse becomes unintelligible. But with JPEGs it isn't really useful to have an unintelligible picture of Mickey Mouse, whereas it can be extremely useful to have a LoRA with the weights underfit just enough that the diffusion gives novel outputs.
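You can watch that "degrees, not absolutes" behavior directly with Pillow (the input filename here is made up):

```python
# Re-save an image at decreasing JPEG quality and watch it degrade
# gradually rather than flipping from "copy" to "not a copy" at any
# single threshold.
from PIL import Image

img = Image.open("mickey.png").convert("RGB")  # hypothetical input file
for quality in (90, 50, 10, 1):
    img.save(f"out_q{quality}.jpg", format="JPEG", quality=quality)
# At quality=90 the output is nearly identical to the source; at
# quality=1 it's a smear of blocks. There's no principled single point
# on that scale where it stops being "the same picture".
```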
> historically, creatives have been among the first to embrace new technologies. ever since the renaissance, artists have picked up every new tool as it's become available, and used it to make great things.
> these people aren't 'luddites'
This is just total bullshit. I know plenty of artists who are embracing this technology to make all sorts of things that tools like SD were not designed to do, like psychedelic music videos, etc.
What the author means is that a few loud blue check marks on Twitter who claim to be artists have been tweeting, get ready for it, inflammatory claims.
> I write a lot of C and still find it tricky when I have to go look at some random C codebase
Absolutely. C code is not necessarily easy to understand. Not at all.
But the C parts themselves are? In C you seldom wonder what C does; you wonder what the C code does.
If you work the way OP likes - single-person projects, small enough to keep almost completely in your head - I can see how C is especially charming.
You never wrangle with C, only with your own code.