Hacker News: zaptrem's comments

What did Epic do?

"Previous data from the trial reported that 107 participants received the mRNA vaccine and Keytruda treatment, while the remaining 50 only received Keytruda. At the two-year follow-up, 24 of the 107 (22 percent) who got the experimental vaccine and Keytruda had recurrence or death, while 20 of 50 (40 percent) treated with just Keytruda had recurrence or death, indicating a 44 percent risk reduction"

Statistically, if those in the control group had gotten the treatment, then in expectation 9 of those people wouldn't have had their cancer return or died. It must be exciting to run these sorts of trials with super promising drugs, but also a little bittersweet/dark.
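A quick sketch of that arithmetic, using only the counts quoted above:

```python
# Recurrence/death counts at the two-year follow-up, from the quoted trial data.
vaccine_events, vaccine_n = 24, 107   # vaccine + Keytruda arm
control_events, control_n = 20, 50    # Keytruda-only arm

vaccine_rate = vaccine_events / vaccine_n   # ~22.4%
control_rate = control_events / control_n   # 40%

# Relative risk reduction: 1 - (22.4% / 40%) ~= 44%, matching the article.
risk_reduction = 1 - vaccine_rate / control_rate

# If the 50 control patients had instead experienced the vaccine arm's rate,
# we'd expect ~11 events instead of 20 -- about 9 fewer recurrences/deaths.
expected_events = control_n * vaccine_rate
fewer_events = control_events - expected_events

print(round(risk_reduction, 2), round(fewer_events, 1))
```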


You make a very good point. But the other side of it is that sometimes it goes poorly. The vaccine could have some previously unknown bad reaction with the Keytruda and the numbers get 44% worse instead. In this case it would be better to be in the vaccine group, but that's not guaranteed.

40% recurrence seems insanely high after just 2 years.

It makes me wonder what the selection criteria for candidates were.


from clinicaltrials.gov:

Eligibility Criteria

Key Inclusion Criteria:

Resectable cutaneous melanoma metastatic to a lymph node and at high risk of recurrence

Complete resection within 13 weeks prior to the first dose of pembrolizumab

Disease free at study entry (after surgery) with no loco-regional relapse or distant metastasis and no clinical evidence of brain metastases

Has a formalin-fixed paraffin-embedded (FFPE) tumor sample available suitable for sequencing

Eastern Cooperative Oncology Group (ECOG) Performance Status 0 or 1

Normal organ and marrow function reported at screening

Key Exclusion Criteria:

Prior malignancy, unless no evidence of that disease for at least 5 years prior to study entry

Prior systemic anti-cancer treatment (except surgery and interferon for thick primary melanomas; radiotherapy after lymph node dissection is permitted)

Live vaccine within 30 days prior to the first dose of pembrolizumab

Transfusion of blood or administration of colony stimulating factors within 2 weeks of the screening blood sample

Active autoimmune disease

Immunodeficiency, systemic steroid therapy, or any other immunosuppressive therapy within 7 days prior to the first dose of pembrolizumab

Solid organ or allogeneic bone marrow transplant

Pneumonitis or a history of (noninfectious) pneumonitis that required steroids

Prior interstitial lung disease

Clinically significant heart failure

Known history of human immunodeficiency virus (HIV)

Known active hepatitis B or C

Active infection requiring treatment

Ages Eligible for Study: 18 Years and older (Adult, Older Adult)

Sexes Eligible for Study: All

Accepts Healthy Volunteers: No


There are affordances for "this works so well and has so few side effects that we are ethically bound to give the control group the drug too." This happened with AZT for HIV.

I’m a founder of one of these AI music companies, and that noise you’re describing (it differs between companies: for us it’s loud vocals, for Suno it’s vocal aliasing/sandiness and mushy instrumentals, etc.) is exactly why I think these songs should not be going on Spotify/etc.

We’ll have this (and the corny lyrics issue) mostly fixed in a month or so; then it mostly becomes a recommendations problem. For example, TikTok is filled with slop, but it’s not a problem: their algorithm helps the most creative/engaging stuff rise to the top. If Spotify is giving you Suno slop in your Discover Weekly (or really crappy 100% organic, free-range, AI-free slop), blame Spotify, not the AI or the creators. There are really high-effort and original creations that involve AI that deserve to be heard, though.

I suggest going back and listening to some of the first experimental electronic music. The tools have improved a lot since then and people have used them to do really cool things, even spawning countless genres.


I'll blame Spotify, and your company.


See here for a truly random sample of human music: https://0xbeef.co.uk/random/soundcloud

Thankfully, most of it doesn't reach your Spotify feed. I think most of it is garbage, but I'd fight for the right of people to continue posting it. All things algorithmic have this exploration/exploitation, diversity/fidelity tradeoff, and Spotify has theirs tuned very heavily toward exploitation/fidelity. I think there's a cool opportunity for someone to put the tradeoff dial into users' hands.
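As a toy illustration of that dial (hypothetical code, not Spotify's actual system), an epsilon-greedy recommender makes the tradeoff a single user-facing parameter:

```python
import random

def recommend(popular, long_tail, epsilon=0.1, rng=random):
    """Toy epsilon-greedy pick: epsilon is the user's 'exploration dial'.

    epsilon=0 -> pure exploitation (only proven hits, Spotify-style)
    epsilon=1 -> pure exploration (a truly random sample, SoundCloud-style)
    """
    if rng.random() < epsilon:
        return rng.choice(long_tail)   # explore: surface something obscure
    return rng.choice(popular)         # exploit: play it safe

hits = ["track_a", "track_b"]
tail = ["obscure_1", "obscure_2", "obscure_3"]
picks = [recommend(hits, tail, epsilon=0.3) for _ in range(1000)]
print(sum(p in tail for p in picks) / 1000)  # roughly 0.3
```

A real system would rank within each pool instead of choosing uniformly, but the dial itself stays one number.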


Ah, when I said "find out what the difference is", I meant more of a cultural understanding than technical.


Not sure it’s a cultural thing since most of the copy coming out of DeepSeek has been pretty straightforward.


This sounds like the same basic voice systems we’ve had for 15 years. Idk if that counts as modern “AI”


The modern AI phone support systems I’ve encountered aren’t able to do anything or go off script, so it sounds better but it’s still a lousy experience.


I've fed thousands of dollars to Anthropic/OAI/etc for their coding models over the past year despite never having paid for dev tools before in my life. Seems commercially viable to me.


> I've fed thousands of dollars to Anthropic/OAI/etc for their coding models over the past year despite never having paid for dev tools before in my life. Seems commercially viable to me.

For OpenAI to produce a 10% return, every iPhone user on earth needs to pay $30/month to OpenAI.

That ain’t happening.


They don't sell their models only to individuals but also to companies, most likely with different business and pricing models, so that's an overly simplistic view of their business. Their spending increases YoY; we can safely assume one of the reasons is the growing user base.

The time will probably come when we won't be allowed to consume frontier models without paying anything, as we can today, and when this $30 will most likely double or triple.

Though the truth is that R&D around AI models, and especially their hosting (inference), is expensive and won't get any cheaper without significant algorithmic improvements. Going by history, my opinion is that we may very well be ~10 years from that moment.

EDIT: HSBC has just published some projections. From https://archive.ph/9b8Ae#selection-4079.38-4079.42

> Total consumer AI revenue will be $129bn by 2030

> Enterprise AI will be generating $386bn in annual revenue by 2030

> OpenAI’s rental costs will be a cumulative $792bn between the current year and 2030, rising to $1.4tn by 2033

> OpenAI’s cumulative free cash flow to 2030 may be about $282bn

> Squaring the first total off against the second leaves a $207bn funding hole

So, yes, expensive (and mind that's the rental costs only) ... but foreseen to be penetrating everything imaginable.


>> OpenAI’s cumulative free cash flow to 2030 may be about $282bn

According to whom, OpenAI? It's almost certain they flat-out lie about their numbers, as suggested by their 20% revenue share with MS.


A bank - HSBC. Read the article.



Not sure where that math is coming from. Assuming it's true, you're ignoring that some users (me) already pay 10x that. Btw, according to Meta's SEC filings: https://s21.q4cdn.com/399680738/files/doc_financials/2023/q4... they made around $22/month per American user (not even a heavy user or an affluent iPhone owner) in Q3 2023. I assume Google's would be higher due to larger market share.


A bank's sell-side analyst team, which is quite different.


If you fed thousands of dollars to them, but it cost them tens of thousands of dollars in compute, it’s not commercially viable.

None of these companies have proven the unit economics of their services.


Latency may be better, but throughput (the thing companies care about) may be the same or worse, since at every step the entire diffusion window has to be passed through the model. With AR models only the most recent token goes through, which is much more compute-efficient, allowing you to be memory-bound. The trade-off is that these models emit more than one token per forward pass, but idk at what point that becomes worth it (probably depends on the model and the diffusion window size).
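A back-of-the-envelope model of that tradeoff (toy numbers; real throughput also depends on batching, attention cost, and hardware):

```python
def ar_positions_per_token():
    # Autoregressive decode with a KV cache: each forward pass feeds only
    # the newest token and emits exactly one token.
    return 1.0

def diffusion_positions_per_token(window, denoise_steps):
    # Block diffusion: every denoising step re-processes the whole window,
    # and the window's tokens are finalized only after the last step.
    total_positions = window * denoise_steps
    tokens_emitted = window
    return total_positions / tokens_emitted

# Example: a 64-token window denoised in 16 steps pushes 16x more
# token-positions through the model per emitted token than AR decode.
# That only pays off if the wide passes move the chip from memory-bound
# to compute-bound, or if far fewer denoising steps than tokens suffice.
print(ar_positions_per_token(), diffusion_positions_per_token(64, 16))
```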


iPhone X’s removal of the home button was a pretty big change, made navigation more fluid and felt very futuristic.


I feel like it made the experience worse. Just like removing the headphone jack made things worse. Want to connect a midi keyboard to your iOS device and play some instruments on GarageBand? Do you also want to use external speakers without Bluetooth latency? Well too bad!


I don’t know that world (MIDI and music), but these seem solvable with a dongle/external device. Some quick googling shows at least a few such devices that seem to do the trick. Obviously those cost more money, but even with a headphone jack you’d need an adapter for the MIDI input to the iOS device, right?

Then again, you have to realize that your use-case is almost a rounding error. There just can’t be that many people, as a percentage, who have that need, and it makes sense (to me) to optimize for the largest pie slice and let dongles/accessories cover the gaps for everyone else.


Right, so I need to buy new hardware for something I used to do for free? You don’t need any adapters, just plug the MIDI keyboard into the USB port. With the old devices a Lightning-to-USB converter was needed, but that let you do lots of other things as well. Let’s be honest, Apple removed the jack to get people to buy their shitty AirPods. My solution in the end is to stop buying iPhones and Apple products altogether.


>Right so I need to buy new hardware for something I used to do for free?

>With the old devices a lightning to usb converter was needed

So it wasn't for free. You still had to get dongles.

Personally, I just need an audio interface to plug a guitar, headphones, and speakers into my iPad. I need to buy something, but nothing I wouldn't have had to buy with a PC either.


It's not like what you're describing is impossible today. With the switch to USB-C, iOS devices are compatible with a vast number of affordable adapters, some of which add features and ports that realistically couldn't physically be on a phone, like HDMI or RJ45.


So I would have to buy a usb hub + a dac. So much for it just works!


What percentage of iOS users use a midi keyboard with their devices? 0.01%?

My desktop audio interface plugs right into an iPhone (USB-C to C), no hub or dongle needed, and provides audio in/out, 5-pin MIDI in/out, a microphone preamp, etc.

When it comes to the flexibility of improvising a jam session with inexpensive gear, we're in a much better place today than 10 years ago when phones had headphone jacks. And I say that as someone who uses wired headphones extensively and carries a 3.5mm dongle everywhere.


The switch to USB-C was the whole reason I went to iOS personally. Lack of a proper headphone jack does suck (the Apple USB-C to 3.5mm is quite good however). Too bad we can't have both


I read the daily breeze article and can’t find where they claim the released water was cleaner than it was before?



They don't have prefix caching? Claude and Codex have this.


At those speeds, it's probably impossible. It would require enormous amounts of memory (which the chip simply doesn't have; there's no room for it) or a lot of off-chip bandwidth to storage, and again they wouldn't want to waste surface area on the wiring. A bit of a drawback of increasing density.
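For a sense of the scale involved, a rough KV-cache size estimate with illustrative model parameters (not the actual chip or model in question):

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # Keys + values (hence the factor of 2) for every layer, KV head,
    # and cached token position, at fp16 (2 bytes per element).
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

# Illustrative 70B-class model: 80 layers, 8 KV heads (GQA), head_dim 128,
# caching a 32k-token prefix in fp16.
gb = kv_cache_bytes(80, 8, 128, 32_000) / 1e9
print(f"{gb:.1f} GB per cached 32k-token prefix")  # ~10.5 GB
```

Multiply by the number of distinct prefixes worth caching and it's clear why SRAM-only accelerators can't keep this on-chip.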

