I think it's interesting to think about this question of open source, benefits, risk, and even competition, without all of the baggage that Meta brings.
I agree with the FTC that the benefits of open-weight models are significant for competition. The challenge is in distinguishing between good competition and bad competition.
Some kinds of competition can harm consumers and critical public goods, including democracy itself: for example, competing for people's scarce attention or their food purchases with increasingly optimized, addictive innovations, or competing to build the most powerful biological weapons.
Other kinds of competition can massively accelerate valuable innovation.
The FTC must navigate a tricky balance here — leaning into competition that serves consumers and the broader public, while being careful about what kind of competition it is accelerating that could cause significant risk and harm.
It's also obviously not just "big tech" that cares about the risks behind open-weight foundation models. Many people wrote about these risks even before they became a subject of major tech investment. (In other words, A16Z's framing is often rather misleading.) There are many non-big-tech actors who are very concerned about the current and potential negative impacts of open-weight foundation models.
One approach that can provide the best of both worlds: in cases where there are significant potential risks, ensure that there is at least some period of time during which weights are not released openly, in order to learn a bit about the potential implications of new models.
Longer-term, there may be a line beyond which models are too risky to share openly, and it may be unclear where that line is. In that case, it's important that we have governance systems for such decisions that are not just profit-driven, and which can help us continue to get the best of all worlds. (Plug: my organization, the AI & Democracy Foundation (https://ai-dem.org/), is working to develop such systems and is hiring.)
Making food that people want to buy is good, actually.
I am not down with this concept of the chattering class deciding which markets are good and which are bad, unless it is based on broad-based and obvious moral judgements.
At first glance it feels like the most effective way to game this system is to grind up credit through aggregate, low-polarization support on fairly neutral, low-impact posts, then strategically 'spend' it on higher-profile polarizing posts. Is that a fair 'red teaming' observation?
Yes, I think this actually could work. Community Notes has a basic reputation system: users need to "Earn In" by rating as "Helpful" notes that are ultimately classified by the algorithm as helpful. Once enough attackers earn in, they can totally break the algorithm.
Breaking it is not as simple as upvoting a lot of, say, right-wing or left-wing posts, though. The algorithm will simply classify all the attackers as having a very positive or negative polarization factor, and decide that their votes can be explained by this factor.
What would work is upvoting *unhelpful* posts. I have actually simulated this attack using synthetic data and sure enough it totally breaks the algorithm. I write about it in this article: https://jonathanwarden.com/improving-bridge-based-ranking/
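The core of that kind of simulation fits in a few dozen lines. Here's a toy version (a sketch of the published matrix-factorization model family, not the production Community Notes code nor the exact simulation from the article; the population sizes, hyperparameters, and data-generation choices are all invented):

```typescript
// Toy Community-Notes-style model: rating ≈ mu + b_user + b_note + f_user * f_note,
// where a note is called "Helpful" when its intercept b_note is high.
type Rating = { u: number; n: number; y: number }; // y = 1 "Helpful", 0 "Not Helpful"

function fit(ratings: Rating[], nUsers: number, nNotes: number) {
  const lr = 0.05, reg = 0.03, epochs = 300;
  let mu = 0;
  const bU = new Float64Array(nUsers), bN = new Float64Array(nNotes);
  const fU = Float64Array.from({ length: nUsers }, () => (Math.random() - 0.5) * 0.1);
  const fN = Float64Array.from({ length: nNotes }, () => (Math.random() - 0.5) * 0.1);
  for (let e = 0; e < epochs; e++) {
    for (const { u, n, y } of ratings) {
      const err = y - (mu + bU[u] + bN[n] + fU[u] * fN[n]);
      mu += lr * err;                              // plain SGD on squared error
      bU[u] += lr * (err - reg * bU[u]);           // user intercept
      bN[n] += lr * (err - reg * bN[n]);           // note intercept ("helpfulness")
      const fu = fU[u];
      fU[u] += lr * (err * fN[n] - reg * fu);      // user polarity factor
      fN[n] += lr * (err * fu - reg * fN[n]);      // note polarity factor
    }
  }
  return { bN };
}

// Synthetic data: 100 honest users in two camps, 20 attackers who have earned in.
// Notes 0..8 are partisan; note 9 is junk (honest raters who see it pan it).
const ratings: Rating[] = [];
for (let u = 0; u < 100; u++) {
  const camp = u < 50 ? 1 : -1;
  for (let n = 0; n < 9; n++) {
    const noteCamp = n % 2 === 0 ? 1 : -1;
    ratings.push({ u, n, y: camp === noteCamp ? 1 : 0 }); // votes along camp lines
  }
  if (u < 10) ratings.push({ u, n: 9, y: 0 }); // a few honest raters pan the junk note
}
for (let u = 100; u < 120; u++) {
  // Attackers rate partisan notes at random, so no polarity factor explains them...
  for (let n = 0; n < 9; n++) ratings.push({ u, n, y: Math.random() < 0.5 ? 1 : 0 });
  ratings.push({ u, n: 9, y: 1 }); // ...then coordinate to boost the junk note
}

const { bN } = fit(ratings, 120, 10);
// In runs of this toy, bN[9] (the junk note's intercept) tends to land well above
// the partisan notes': the coordinated votes can't be explained by polarity,
// so the model absorbs them into "intrinsic helpfulness".
console.log([...bN].map(x => x.toFixed(2)).join(' '));
```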
Oh hey, I came across your Social Protocols group while doing my regular rounds of Polis-related projects a few months ago, when I found Propolis! I was trying to figure out why your name was familiar-ish :)
There's also a Polis User Group discord: https://link.g0v.network/pug-discord It's pretty low-key lately, but there's a high density of potentially-aligned ppl. I am hoping to restart the weekly open calls for prospective Polis facilitators and self-hosters, in case you're interested in joining.
Thanks for your posts by the way! I am jealous of your output -- I tend to have a few calls/meetings about Polis per week, but am not so great at producing clean artifacts like this :)
The reasoning was: coming up with (and answering) yes-no questions is more effort and a higher barrier to participation than just posting anything and getting up/downvotes, like in a social network. Requiring this formalization of all content on a platform creates an entry barrier: people need to formulate whatever they want to post as a yes-no question. At the same time, it disallows content that does not fit the yes-no question model.
Our big insight was: we can drastically simplify the user interaction and allow arbitrary content, but keep the collective intelligence aspect. That's achieved by introducing a concept similar to community notes, but in a recursive way: every reply to a post can become a note, and replies can have more replies, which in turn can act as notes for the reply. Notes are A/B-tested to see whether, when shown below a post, they change the voting behavior on the post. If a reply changes the voting behavior, it must have added some information that voters were not aware of before, like a good argument.
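Roughly, that check could look like this (a simplified sketch, not our actual implementation; the names and threshold are illustrative):

```typescript
// Sketch of the note A/B test: show the reply-as-note to a random half of
// voters, then ask whether the upvote rate on the parent post differs between
// the two arms. Here: a plain two-proportion z-test.
type Arm = { upvotes: number; votes: number };

function noteChangedVoting(shown: Arm, hidden: Arm, zCrit = 1.96): boolean {
  const p1 = shown.upvotes / shown.votes;
  const p2 = hidden.upvotes / hidden.votes;
  const pooled = (shown.upvotes + hidden.upvotes) / (shown.votes + hidden.votes);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / shown.votes + 1 / hidden.votes));
  return Math.abs(p1 - p2) / se > zCrit; // significant shift => the reply added information
}

// Example: upvote rate drops from 40% to 22% when the note is shown.
console.log(noteChangedVoting({ upvotes: 44, votes: 200 }, { upvotes: 80, votes: 200 })); // true
```

In practice you'd also want to handle small samples and repeated testing, but that's the core idea.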
If you can provide a script that takes in an HTML file and provides an image ready for rendering, that would be amazing. Then I can automatically take any website and have a cron job that dumps the result into a shared dropbox link where it can be used by the screen.
> If you can provide a script that takes in an HTML file and provides an image ready for rendering, that would be amazing.
Yea, that's something I have been trying to build, but it's surprisingly non-trivial. There are a bunch of headless browser options, but I haven't found a good way to tell them: "render the page at width X and height Y and then take a screenshot".
That seems like a problem that should have 100 open-source solutions, and I am sure there are some that work really well! But I personally haven't found one yet.
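For what it's worth, in Puppeteer it's roughly this (a sketch from memory using standard Puppeteer calls, but treat it as untested; the paths are placeholders):

```typescript
// Render an HTML file at a fixed viewport size and save a screenshot.
import puppeteer from 'puppeteer';

async function htmlToImage(htmlPath: string, outPath: string, width: number, height: number) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setViewport({ width, height });                            // "render at X by Y"
  await page.goto(`file://${htmlPath}`, { waitUntil: 'networkidle0' }); // wait for assets
  await page.screenshot({ path: outPath });                             // exactly width x height
  await browser.close();
}

htmlToImage('/absolute/path/to/page.html', 'screen.png', 800, 600);
```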
At least that is what I used to do for screen testing in some of our low-hanging-fruit QA.
At some point I rewrote it in Puppeteer and it was as simple as the snippet above.
The resulting screenshot is exactly the X/Y size.
I'd be interested in why this doesn't work in your use case.
I made something almost exactly like this before. I needed to convert SVGs to PNGs and have them display the same way they looked in the browser. It turned out that spinning up Chromium and taking a screenshot was the easiest way to do that. I think I used Puppeteer.
It feels fairly reasonable imo to specify something like "this uses PhantomJS with the following screen size" and just say people's work has to fit that.
It's worth knowing that, according to some recent research, taking a topical antibiotic like this might have permanent impacts on the microbiome in your nose (and lead to drug resistance).
Yes, but systemic antibiotics are substantially worse in this regard, as they change your entire systemic microbiome, and a chronic sinus infection is a permanent, highly negative microbiome in your nose. Sometimes it's better to hit the reset button than endure misery for the rest of your life :-)
Just to add to this: my ENT prescribed a nasal probiotic prior to my surgery due to staph risk. I used it for 6 months prior to the surgery, and now I continue to use it because, whether it's a placebo or something special, it seems to improve the airflow in my nose beyond what I'd expect from just squirting some buffered water up my nose. Additionally, I seem to have fewer head-colds, which also seem to resolve more rapidly (I have two small children, so my house is literally an incubator for infections). In case you were curious, this is the product: https://liviaone.com/collections/probiotics/products/probiot...
Fascinating; this led me to this project, which is pretty interesting: https://www.modos.tech/blog/modos-paper-monitor
Seems like a very exciting effort and product, but I'm not sure if it's still active.
You can use the "Make spoken audio from" and "play audio file" actions available in Shortcuts. I was able to get it to play from the Mac dock using this method.
> Sure, but that's irrelevant. Whether or not the user understands the answer they posted is not the concern of the site.
Well, that's unfortunate. Then again, I guess that's a logical conclusion of the "safe harbor" for serving any user-submitted content: Stack Exchange only does the most cursory moderation, and the rest is caveat lector.
I think this gets almost all the way there but not quite — there is one more vital point:
How we act depends on our environment and incentives.
It is possible to build environments and incentives that make us better versions of ourselves. Just like GPT-3, we can all be primed (and we all are primed all the time, by every system we use).
The way we got from small tribes to huge civilizations is by figuring out how to create those systems and environments.
So it's not about "reaching for the stars" or complaining about how humanity is too flawed. It's about carefully building the systems that take us to those stars!
But there are communities that make it work, and I believe these are negatively affected by the general rules we try to establish for social media through systems like this.
I don't believe any single system can be a solution; it isn't a requirement for a lot of communities either. I don't know what differentiates these groups from others, probably more detachment from content and statements. There is also simply a difference between people who embraced social media to put themselves out there and ghosts who have multiple pseudonyms. Content creators are a different beast; they have to be more public on the net, but that comes with different problems again.
I believe it is behavior and education that would make social media work, but not with the usual approaches. I don't think crass expressions with forbidden words or topics are a problem; on the contrary, they can be therapeutic. I'm just saying this because it will be the first thing some people will try to change: ban some language, ban some content, the usual stuff.
- by “failure of the algorithm”, the vocal minority actually mean “lack of algorithmic suppression and promotion according to how well a given speech aligns with academic methodologies and values”.
- average people are not “good”; many are collectivists with varying capacities for understanding individualism and logic. They cannot function normally where constant virtue signaling, prominent display of self-established identities, and the alignments mentioned above are required, such as on Twitter. In such environments, people feel and express pain, and make efforts to recreate their default operating environments, overcoming the system if need be.
- introducing such normal but “incapable” people, in fact honest and naive and just not post-grad types, into social media caused the current mess, described by the vocal minority as algorithm failures and echo-chamber effects, and by mainstream people as elitism and sometimes conspiracy.
Algorithmically suppressing and brainwashing users to align with such values would be possible, I think (and sometimes I think about trying it for my own interests; imagine a world where every pixel has had 0x008000 subtracted from it, since it's a weird personal preference of mine that I don't like highly saturated greens). But an important question of ethics has to be discussed before we push for it, I also think, especially with respect to political speech.
How do you go about determining what is collaborative or "bridging" discourse, though? That seems like a tricky task. You have to first identify the topic being discussed and then make assumptions based on past user metrics about what their biases are. Seems like you would have to have a lot of pre-existing data specific to each user before you could proceed. Nascent social networks couldn't pull this off.
This also seems gameable. Suppose you have blue and green camps as described in the linked paper, and content gets ranked highly when it gets approval from both blue and green users. Then one of the camps may decide to promote their own opinion by purposefully engaging negatively with the opposing content in order to bury it.
This seems no different from "popularity based" ranking mechanisms (e.g. Reddit) where the downvote functionality can be used to suppress other content.
Maybe the assumption is that both camps will be abusing the negative interactions? But you can always abuse more.