> Four others that raised money but with painful recapitalizations that effectively wiped out early shareholders
Does someone have a recommendation on reading that goes deeper on this point? What enables later investors to do this? What can early investors do to protect their investment?
There are a few different ways this can happen. It has to do with seniority of liquidation preferences -- basically, if you recap or you exit for less than your highest valuation, who gets paid first? This blog post is a pretty great summary of what can happen: https://heidiroizen.tumblr.com/post/118473647305/how-to-buil...
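To make the seniority point concrete, here is a minimal sketch (not taken from the linked post, all numbers invented) of a simplified 1x, non-participating senior liquidation preference: the senior investor is paid back first, and only what remains flows to early shareholders and common.

```python
# Hypothetical cap table: a late investor put in $50M with a 1x senior
# liquidation preference; early investors and founders sit behind it.
# All numbers are made up for illustration.

def payout(exit_value, senior_preference):
    """Pay the senior preference first; the remainder goes to everyone else."""
    to_senior = min(exit_value, senior_preference)
    to_rest = exit_value - to_senior
    return to_senior, to_rest

# A $200M exit leaves plenty for the early holders...
print(payout(200_000_000, 50_000_000))  # (50000000, 150000000)

# ...but a $55M down exit leaves them almost nothing...
print(payout(55_000_000, 50_000_000))   # (50000000, 5000000)

# ...and below the preference, they are wiped out entirely.
print(payout(30_000_000, 50_000_000))   # (30000000, 0)
```

Real waterfalls stack multiple preference tranches (and sometimes participation and caps) on top of this, but the basic dynamic is the same: in a recap or low exit, whoever is senior in the stack eats first.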
I did a little digging because I haven’t followed the news on this. Through a chain of links from the article is https://edworkforce.house.gov/news/documentsingle.aspx?Docum... which is a report from the Committee on Education led by Chairwoman Virginia Foxx (R-NC) detailing anti-Semitic activity on university campuses.
It seemed to me that a lot of the key findings were framed in a political way. I was hoping to see more specific damages that might justify these huge cancellations of grant funding. For example, one key finding was “Northwestern put radical anti-Israel faculty in charge of negotiations with the encampment.” I admittedly don’t have time to read the full report, but how does labeling the faculty involved in the negotiation amount to a clear case of discrimination in violation of Title VI of the Civil Rights Act? Shouldn’t the key findings focus more on the harms experienced by Jewish students?
It’s a shame this report is being used to justify suspension of research funding.
Any time I hear this administration accuse someone of anti-Semitism, I have to wonder whether it's actual anti-Semitism or simply support for the existence of Palestinians.
I'll probably get downvoted for this. When I started dating a woman from Israel, her stories didn't jibe with the stories we hear here. So I started digging into it and understood why she said it was such a mess, and why there was no way anybody outside of Israel could understand what was going on.
I'm not saying this as a form of both-sides-ism, but as an acknowledgment that it's a shitshow and may never be untangled, least of all by the US/EU.
It is a complete shitshow, and one can like Israelis and believe that they are mostly good people, while also believing that what is happening to the Palestinians is super, super wrong.
This resonates with my experience. When a friend is having their first manic episode, it’s hard to get them to recognize that and get the help they need. But after dealing with the aftermath of a couple of manic episodes, my friend wanted to get help. Even when he had a hard time complying with meds, he was able to recognize when he was heading into another manic episode and seek help proactively.
My friend’s family does a few things to mitigate some of the bad decisions that a manic episode brings on: 1) the family has location tracking, 2) the family can pause credit cards, and 3) the family can take away car keys.
I don’t expect any of those mitigations can be implemented in your case today. But if your friend is currently going through a bad manic episode, then they will probably want to prevent future episodes from spiraling. Having a conversation about how to achieve that, and agreeing on a mutually defined set of warning signs, will help address the problem.
Yeah, seconding this advice. If you really want to help someone who struggles with this, long-term help, i.e. working with them on strategies to avoid escalating and exacerbating mania in the future, is the highest-impact form of help.
Don't agonize too much over how to handle the situation in the moment, don't take anything they say seriously, and don't try to make any decisions; just keep them from hurting themselves and try to steer them toward medical help to the best of your abilities.
My intuition would be that there are certain conditions under which Bayesian inference for the missing data and multiple imputation lead to the same results.
What is the distinction?
The scenario described in the paper could be represented in a Bayesian method or not. “For a given missing value in one copy, randomly assign a guess from your distribution.” Here “my distribution” could be Bayesian or not, but either way it’s still up to the statistician to make good choices about the model. The Bayesian can p-hack here all the same.
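A minimal sketch of that "randomly assign a guess from your distribution" step, with a deliberately simple (and deliberately non-Bayesian) choice of distribution: a normal fitted to the observed values. The toy data and the choice of a plain fitted normal are my assumptions, not from the paper; a Bayesian would instead draw from a posterior predictive, but the mechanics of filling the copies look the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data with missing entries (NaN).
data = np.array([2.1, 1.9, np.nan, 2.4, np.nan, 2.0])
observed = data[~np.isnan(data)]

# The "distribution" here is just a normal fitted to observed values.
# This is where the statistician's modeling choices (good or p-hacked)
# enter, Bayesian or not.
mu, sigma = observed.mean(), observed.std(ddof=1)

# Make m imputed copies: in each copy, fill every missing value with
# an independent random draw from the chosen distribution.
m = 5
copies = []
for _ in range(m):
    copy = data.copy()
    copy[np.isnan(copy)] = rng.normal(mu, sigma, size=np.isnan(data).sum())
    copies.append(copy)

# Downstream analysis runs once per copy, and the per-copy estimates are
# then pooled (e.g. via Rubin's rules). Here we just pool the means.
pooled_mean = np.mean([c.mean() for c in copies])
```

The between-copy variation in those per-copy estimates is what lets multiple imputation propagate the uncertainty from the missing values into the final answer.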
I'm an engineer in this space and would be interested in learning more about your business and why you feel you are unique and under-served. My contact info is in my profile.
I do find that the biggest challenge as a vendor in logistics is highlighted by, "Our business operates in a specific niche and there are no other providers who cater specifically to our industry." This makes it difficult to make software that works for everyone, even if we are all in the same industry.
> I asked Firefly to create images using similar prompts that got Gemini in trouble. It created Black soldiers fighting for Nazi Germany in World War II. In scenes depicting the Founding Fathers and the constitutional convention in 1787, Black men and women were inserted into roles. When I asked it to create a comic book character of an old white man, it drew one, but also gave me three others of a Black man, a Black woman and a white woman. And yes, it even drew me a picture of Black Vikings, just like Gemini.
Why would we expect Gemini or Firefly to be unable to produce these images? I can definitely sympathize with not wanting to make offensive images, but there’s no way these tools could be sanitized so as never to produce something offensive given the right prompts, just like you can’t stop Photoshop users from creating similar images manually.
1. Gemini and Firefly are programmed (by a prompt) to give you "inclusive" results when you ask for depictions of people. So if you ask it to generate a crowd of people, while normally the AI could generate mostly or purely white crowd (perhaps because white people dominated the training dataset), with such a prompt it will try to give you people of both genders and all races.
2. Because of the above, in some contexts where you would expect only white people, for example the founding fathers of the USA, it also depicts people of various races and genders. This is historically inaccurate and produces a lot of controversy, especially at a time when doing the reverse (depicting a historically non-white person as white) usually leads to some kind of outrage. And so such behavior is put in the same category as, e.g., claiming "you can't be racist against white people".
3. And as a result, many people decide to "fight back" by trolling the creators of the tool, exploiting the bias so that instead of being negative towards white people, it is negative towards other groups. So instead of taking positive figures like the founding fathers, you ask the AI to generate depictions of evil people who historically were white. Now the groups who claim "you can't be racist against white people" suddenly care about historical inaccuracy. The trolls turn their enemies against each other.
4. Your example of making the black Nazi image manually misses an important point: the prompt used wasn't "a picture of black German soldiers in 1945". It was "a picture of German soldiers in 1945". If you commission an artwork described in the latter terms and the artist delivers a picture of black soldiers, I think you just wouldn't pay and would consider the contract unfulfilled.
It's not that black soldiers for Nazi Germany are offensive. It's just stupid, and everyone who knows how ML works knows that Google does a lot of stupid, often called “woke”, stuff to produce these images. Any other non-mutilated ML algorithm would just give you the more historically accurate, and more common in the dataset, version: white Nazis.
It turns out the dissertation is not quite published yet, but this paper [1] will be a part of it. Sam and I were in the same lab, and his work has evolved to focus even more on AI-guided protein design. Perhaps this quote from the abstract best makes the connection to this post:
> We applied the protein G B1 domain (GB1) models to design a sequence that binds to immunoglobulin G with substantially higher affinity than wild-type GB1.
If you’re up for reading a dissertation I have one in mind I can dig up. I will check back here in 12 hours or so or you can contact me from my profile.