neom's comments | Hacker News

Given we're long-evolved, tribal animals, that culture is an evolutionary-pressure feedback mechanism, and that prediction is fundamentally useful to our reality, different "thinking styles" (ways to predict/understand outcomes) are useful, aannnd, tribally we used people for their usefulness, so I often wonder if "faulty" is the correct lens. That is to say, if prediction variation was useful to tribes, having both 'trust the model' and 'trust the senses' type people, then framing these as disorders rather than trade-offs is probably the wrong lens entirely. Society/culture/reality is so narrow and predictable these days; faulty in what context, you know? If you bred 20 generations of "best night watchers", in the jungle at night, looking down, quiet, still, dark... you'd probably be selecting for specific traits, and creating new ones: retinal rod density and sensitivity, faster dark adaptation/contrast, attention/vigilance traits, pattern detection, anxiety-adjacent traits in hypervigilance, probably something about circadian rhythm tolerance, etc. (https://www.researchgate.net/publication/40886135_Not_By_Gen...)

it becomes a disorder when the person faces "too many" difficulties due to their difference (instead of enjoying the advantages)

and of course there are extreme cases, like the many non-verbal people (who likely wouldn't be able to live alone; their communication is limited to poking at pictures on a board), and the true end of the spectrum, where nothing short of institutionalization can provide the environment and care necessary for survival

but of course having our society somehow become so narrow allows for the economic efficiency to even have the surplus that we then give to people with these disorders (in the form of care, attention, medical research, and so on)


Yes, having society "somehow" become ordered around a certain "norm" on a spectrum certainly does create a disordered reality for the others...

Gotta be careful about them hidden microphones, they could be listening and recording all the keyboard clicks and translating them to the device.

Could be true, but if so I'd guess you're off by a generation; us 40-year-old "old people" are still pretty digital native.

I'd guess it's more a type of cognitive dissonance around caretaker roles.


Funny how many dot-com-esque things are popping up.

https://en.wikipedia.org/wiki/DRAM_price_fixing_scandal


The big news story this cycle is that OpenAI secured two large contracts with two separate DRAM companies at the same time, carefully timing the deals so that neither DRAM company knew about the deal with the other. Had either company known the true demand from OpenAI they would have charged a lot more, but each only saw about half.

In other words, there was no collusion between the DRAM manufacturers. They were both caught off guard and left a lot of money on the table.

The current price increase is the result of the huge demand spike. Production takes years to ramp up, but demand has spiked rapidly. Supply and demand.


To those who haven't heard how colossal OpenAI's contracts are:

900,000 wafers monthly. Tom's Hardware estimates that's equal to 40% of global DRAM production capacity.

https://www.tomshardware.com/pc-components/dram/openais-star...


I find it very telling that both Samsung and SK Hynix have already stated that they don't plan to expand capacity - officially to prevent overcapacity in the future. It would also be plausible that both doubt OpenAI will follow through with the contract.

Expanding manufacturing capacity takes many years. Memory has historically been a cyclical business with boom and bust periods. It’s reasonable for manufacturers to be cautious about deciding to expand.

If the demand holds I’m sure they’ll expand. Until then, I think they see it as a short term supply spike.


They don't need to expand capacity to fulfill this contract.

They would want to expand capacity if they believed this increase in demand is long lasting - the implication is therefore that they don't believe it, or not enough to risk major capital expenditures.

You saw the same with GPU makers not wanting to expand capacity during the cryptocurrency boom. They don't want to be left holding the bag when the bubble pops.


Oil has been like that as well. High oil prices don't trigger nearly as much drilling as they used to.

https://www.cbsnews.com/news/oil-production-prices-us-compan...


Part of that equation, FWIW, is that certain countries would flood the market with supply to make any new projects suddenly unprofitable.

Which sucks extra bad, because if you shut the project down, you can't just flip a switch to start it back up. Gotta put together a whole new team and possibly retrain them.


I sincerely hope that OpenAI goes down in flames; those DRAM contracts would then go to the highest bidder, so probably Google or whatever other AI competitor.

Honestly, I'm having problems remembering the other AI companies without googling. I recall MS, Facebook, Amazon, Google, and Anthropic.


When OpenAI fails, all the AI projects everywhere else will also be killed. Such is the nature of bubbles. With any luck the farm land where the datacenters were built will be sold back to the farmers for half off and they get a free barn out of the deal.

It's more likely that overcapacity is put to work in a plan B, like cheap cloud virtual desktops. Why spend effort on spying and tracking users when their whole desktop computer is in your data center?

When's the last time you bought something at Sears?

Connected up to the grid and water supplies, I daresay they'll not end up as barns.

An air conditioned barn with space for 100,000 cattle!

The AI projects that make sense will live. Capitalism is survival of the fittest.

That farm land is dead and gone, best we can do is urban/rural decay.


When the AI bubble pops, it's going to take the good parts of AI out with it.

Capitalism has never been about the survival of the fittest. That's just weird Nietzschean-Libertarian fantasy where someone ends up blaming the lack of truly free markets for their inability to get a date.

Any large building in a rural area will be used as a barn if it has no other useful purpose. It's kind of hilarious when I pass by the old AT&T long lines facility being used as a hay barn.


They won't be going to the highest bidders if they are under contract to produce at the claimed level, unless what you are referring to is the residual.

I refuse to read the AI slop that passes for journalism about the contracts OpenAI bought, but if they take physical delivery and open actual datacenters built with the RAM they'll be parted out at the minimum, if not absorbed by another AI provider or Big Tech in general.

"prevent overcapacity" is just a fancy way of saying "we prefer to gouge consumers at little risk to us."

Hopefully the Chinese manufacturers ram(p) up rapidly and spike Hynix and Samsung with heavily undercut prices.


> "prevent overcapacity" is just a fancy way of saying "we prefer to gouge consumers at little risk to us."

No it's not. The memory business has been cyclical for years. Overexpansion is a real risk because new manufacturing capacity is very expensive and takes a long time to come online.

If they could make new manufacturing come online quickly they would do it and capture the additional profit of more sales.


If you post an operating profit of $25 billion, yes, in a healthy market true competition would force you to either A) eat into your profit margin by reducing prices or B) invest in R&D and capacit-

Actually, let me eat my words, you are right. As I typed this I saw some news from an hour ago[0] about SK Hynix planning to invest about $500 billion into 4 more fabs. I imagine [hope] Samsung will follow, and together with Chinese memory fabs ramping up both in capacity and technology, prices will return to earth in 2027, maybe 2028.

Guess I am just a little too bitter, because GPU prices finally seemed to normalize after half a decade of craziness. That, topped with corporations in the West usually forgoing investment and using profits like these for massive stock buybacks and dividends, has soured my expectations.

[0]https://www.pcgamer.com/hardware/memory/hot-on-the-heels-of-...


Additional profit? They're making a lot more money right now than if they had more supply.

The risk of overexpansion is real but I really doubt they want to expand much in the next couple years. They don't have to worry about being undercut by small competitors so they can enjoy the moment.


No, they are making higher margins, but not getting as much profit as they could have.

Look at the standard Econ 101 supply-demand curve.

If they could make and sell twice as many chips, it would not cut their margins anywhere near half. They would be making much more.

When demand spikes up and down there will be pain, because booms are not predictable in timing, size, or duration. And accelerating supply expansion is very expensive, slow, and risky.

Many boom-prompted RAM supply expansions have ended in years of unprofitable overcapacity.
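A back-of-the-envelope sketch of that claim, in Python, using a made-up linear demand curve and unit cost (every number here is an illustrative assumption, not a real DRAM figure):

    # Assumed inverse demand: price per unit falls as quantity sold rises.
    def price(quantity):
        return 20 - 0.5 * quantity  # hypothetical dollars per unit

    UNIT_COST = 4  # hypothetical constant cost per unit

    for q in (10, 20):  # current output vs. doubled output
        p = price(q)
        margin = (p - UNIT_COST) / p
        profit = (p - UNIT_COST) * q
        print(f"q={q}: price={p}, margin={margin:.0%}, profit={profit}")

    # q=10: price=15.0, margin=73%, profit=110.0
    # q=20: price=10.0, margin=60%, profit=120.0

Under this assumed slope, doubling output cuts the margin from 73% to 60%, nowhere near half, while total profit rises. A steeper assumed demand curve flips the result, which is exactly what the reply below disputes.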


> If they could make and sell twice as many chips, it would not cut their margins anywhere near half. So they would be making much more.

You really think that? I would expect their margins to drop down to a small percentage if they doubled production. Maybe even less.


Price spikes like we are seeing reflect tremendous pent-up/increased demand.

Any price increase reduces purchases by many customers. This tends to keep prices stable, with only small changes in price relative to regular changes in demand.

Yet prices have gone way up.

Which means that many people and businesses are cancelling, delaying, or scaling back their RAM purchases. And yet new demand is incredibly high.

To get prices down, supply would have to grow tremendously. Enough to soak up even more purchases from the very motivated, and to cover all the purchasers that have currently pulled back.


There's room for making more, but I don't think doubling makes sense from a profit point of view.

Especially because the demand curve that's skyrocketing right now is the RAM that isn't in long-term contracts. Doubling all production would much more than double the RAM available for normal purchases.

> To get prices down, supply would have to grow tremendously. Enough to soak up even more purchases from the very motivated, and to cover all the purchasers that have currently pulled back.

Is "down" here back to normal levels?

But normal levels are like a tenth of the profit margin. They'd make significantly less money doing that.


At these prices, there are certainly potential customers not purchasing when they otherwise would have.

You don't maximize profit by maximizing sales.

Now thinking of this from the other side: 2 big DRAM producers are taking the risk of dedicating a very big part of their production to AI. If we assume they also have similar deals with other AI companies or big datacenters, what is their risk profile if the AI bubble bursts? Are they viable as companies then? What is their plan B?

Their risk is near zero. They are not increasing capacity, only selling the available capacity to the highest bidder. Whenever these AI companies run out of money, these producers can simply resume their regular business.

It only depends on whether they get addicted to the high prices... as long as they can withstand a collapse in prices, then you're right, they have minimal risk.

Assuming they haven't massively changed operations to crank up supply, which seems to be the case, they shouldn't be massively hurt by a price drop.

If this price goes on for a longer period though, I assume that won't be the case.


Well, an example would be if they took out massive debts on the back of an expected revenue stream.

Not sure about DRAM companies, but many businesses would still go under if they sold their annual production to a company that then goes bankrupt and won't pay anything for the delivered goods.

Hopefully they get paid more than once a year. Their risk is completely dependent on 1. the net X days until they are paid, and 2. How fast they delay shipment when/if a payment is delayed.


Everything is prepaid, just like you buy RAM online.

> what is their risk profile if the AI bubble bursts?

Exactly. This is why they’re not scrambling to invest in additional capacity. If these memory manufacturers went all in on new capacity it would take years to build out. If the bubble bursts, or even if it doesn’t burst and just tapers off back to normal demand, they would be in a bad position with excess manufacturing capacity that isn’t paying off.


I think the price increases we are seeing are a direct result of the skepticism about AI scale viability. The big dram houses aren’t increasing capacity, due to the risks you mention.

So demand from other sources has to be suppressed through being priced out in order to meet those supply promises made to OAI in ignorance of their true scale.

This is OAI doing suppliers dirty by making economy distorting moves without transparency, intentionally distorting the market in an effort to hurt competitors.

Yet another example of the “free market” creating destruction for the general public.

As a thought experiment, replace “dram” with “rice” or another essential food stock. Market manipulation such as this is wildly irresponsible, anti-humanity and antithetical to public good. Wars are started over less.

This is an excellent example of the actual alignment of OpenAI as an organization. Yet we are to trust them with leading the way in the alignment of our manqué oracles of truth and power?


> This is OAI doing suppliers dirty by making economy distorting moves without transparency, intentionally distorting the market in an effort to hurt competitors. Yet another example of the “free market” creating destruction for the general public.

At the speed OpenAI is growing, it's far more likely they're trying to protect themselves first, not harm competitors. The market only exists because it's free / semi free. Were it controlled by statist bureaucrats - which is the sole alternative back in reality - the situation would be drastically worse. Just ask Soviet Russia. You'd get your meager once every ten year DRAM ration and you'd like it.

The general public isn't the standard of morality or good. Invoking it is meaningless.


I think we can dispense with the strawman Soviet Russia alternative lmfao.

In a reasonably well-regulated market, deception at that scale (which utterly destroys competitive buildouts by externalizing the costs that would normally be borne by a customer needing an exceptional order) would be a clear violation of market laws. That deceptive, aggressively anticompetitive behavior such as this, blatantly harmful to other innovation, passes as "free market" is a laughable assertion… this is merely the will of the stronger, not any reasonable definition of a free market. A free market implies transparency in pricing and demand, alongside fair competition practices.

Anyone else planning to innovate in the ML space just took a huge hit thanks to OAI, including scientists, pharmaceutical companies, and other things that arguably operate mostly in the realm of clear public good.

Their inherent assumption that might = right is a very powerful indication of their inability to be trusted in the control of a tool / weapon that has more potential to steer the future of humanity than nuclear power/weapons ever did. It’s clear that A: they don’t see AI as any big deal, or B: they don’t care how their actions affect humanity in any nuanced sense of the concept.


> The big dram houses aren’t increasing capacity, due to the risks you mention.

Except they are:

> SK hynix to boost DRAM production by a huge 8x in 2026, still won't be enough for RAM shortages

> It's also not just SK hynix that is boosting DRAM production capacity, with both Samsung and Micron rapidly increasing their respective DRAM production numbers.

https://www.tweaktown.com/news/109011/sk-hynix-to-boost-dram...


Note to self: don't trust TweakTown.

That's such an impossibly big number for that timeline. The actual news is they're ramping up their newest node, which they were doing anyway, and which was a small percent of their total production.


Is this new capacity or will some kind of other chip type suffer?

lol 8x in 2026 hahahahahahaha that is one of the funniest things I've ever heard coming from a semiconductor manufacturer. Maybe 8x as many of something they weren't selling beforehand, but increasing production on full fabs by 8x? I'd love to be wrong but this makes zero sense to me.

This also creates massive risk.

Financial trouble at OpenAI - even minor stuff where they slow purchases by 25% - could have a big impact on global prices.


At that point even if the AI bubble bursts you have a solid business as a RAM scalper

You mean OpenAI will profit by selling their RAM stocks if the AI bubble bursts? I doubt it, honestly. If the AI bubble bursts, then global demand will collapse altogether, crashing the value of HW.

Free GPUs for everyone! Bring a truck!

I honestly can't wait to use used GPUs as Lego bricks, similar to what kids in the Weimar Republic did with cash.

My self-training robot army is poised to conquer all. It's merely waiting for parts.

Never gonna happen. Cash has no intrinsic value except maybe for use in fires / as toilet paper. GPUs, while currently inflated in price, will always find enough value. Their price might go down 50-75% but never 99%.

I've got an S3 VGA adapter to sell you at 25% of list price

They're talking about shorter time scales, the effect on top of the normal obsolescence treadmill.

> GPUs while currently inflated in price will always find enough value.

What is the instrinsic value of one of millions of GPUs, if the world only needs 15-20% of them?


About fifty teraflops.

That's the meaning of intrinsic calor - the device can do what it can do, regardless of market conditions. Today it has the value of fifty teraflops, and tomorrow it still does, unless it breaks. However, intrinsic value cannot be measured in dollars.


"calor" was a typo of "value" obviously.

And yet we're talking about electronics here: they don't have sentimental value, and just because compute capacity sits unused, there's no guarantee it ever will be used, even at a per-unit cost approaching €0.

I'm sure that farmers during the Great Depression were also consoling themselves with the "intrinsic caloric value" of their corn.


Demand for GPU power is much more elastic than demand for food calories.

Food calories are cheaper to convert into something useful. It's not like GPUs, once bought for peanuts, turn into perpetual motion machines. They need power, cooling, a whole infrastructure built around them.

If that were true, GPUs would have taken the world by storm already in the roughly 30 years they've been around.

Even for GenAI it's likely ASICs take over at some point if we really care about performance.


GPUs have taken the world by storm. There's one in almost every computer, and they make up the bulk of supercomputers!

If you put a 75% discount on these powerful GPUs there will be a long line of non-AI-company purchasers.


GPUs used to cost 20% of what they cost now, and Intel and AMD make perfectly serviceable GPUs for most PCs. NVIDIA's top-of-the-line GPUs won't suddenly be plugged into lowly laptops.

Yes, lots of companies will buy them for cheap, but these AI beasts also have OpEx costs. Not every alternative use is worth the money, and there are 0 guarantees that the alternative uses cover the gap. NVIDIA sells 80% of its GPUs for AI now.

I think people don't realize just how big this bubble is.


As I said, the intrinsic value of a GPU is not measured in €. In fact, the lower the sale price gets, the better a deal it is, not worse - you get the same intrinsic value for less extrinsic cost.

There are also intrinsic costs, mostly power consumption.


The most likely outcome is kids in third world countries extracting the valuable metals from piles of discarded GPUs.

After GPU crypto mining became unprofitable Chinese manufacturers took "mining only" cards, desoldered the GPU and built new graphics cards using the chips. So at least the lower end stuff (RTX6000) could be repurposed like that.

Soon you will bring a wheelbarrow of GPUs to buy a loaf of bread.

If the AI bubble bursts -> global market crash. Ehm, what will you be scalping with then, bottle caps, Fallout money? :D

I don't really buy that AI crashing will cause "the global market" to crash. How exactly do you think that works?

I find it very hard to believe that two South Korea-based chaebol memory producers would have had no wind of this from the other... more likely is that they both agree that it is not in their or the nation's interest to take on too much of a risk profile, while also balancing the risk of appearing to be colluding.

I wonder if they caught wind, but they both thought they were competing for the same contract.

Another way of making my point: you don't become one of the two leading memory-producing companies in the world by being naive.

It's more advantageous long term for them to be oblivious to it. Ultimately it gives them what they want, which is reduced supply and increased pricing.

One difference that strikes me versus the .com bubble is that I don't remember the .com companies having sustained multi-billion-dollar losses / cash burn. They were not profitable, but this is quite different. If (or when) the music stops, won't OpenAI go bust immediately? That's quite a counterparty risk those companies are taking.

Dotcom companies were basically stuff like pets.com, not things considered strategic with de-facto government backing.

Cisco was very much not "stuff like pets.com". Most of the money lost in the dotcom crash wasn't in pets.com, it was the infrastructure companies like Cisco and Sun.

Funnily enough Cisco’s stock has recently recovered back to its dotcom peak.

Thanks to the irrational exuberance of the AI bubble...

Cisco did not go bust.

And I didn't say it went bust.

I believe we'd have to tease out what proportion of that cash burn is essential to keep serving compute to customers (which I assume to have profitable unit economics), versus what percentage is optional datacenter buildouts that could be paused in this situation.

OpenAI is ten years old, dotcom companies were 2-3 years old.

Some dotcom-boom companies that survived also had sustained multi-billion-dollar losses, AFAIR - Amazon and Uber, for example.


Uber was founded half a decade after the dot com bubble.

"half" confused me but okay 2001 and 2009 makes sense.

Massive cash burn was an absolutely key feature of the dotcom boom/bust. Admittedly, it never really went away - there's always been a free->enshittification profit taking cycle since then. It's just the scale that's terrifyingly different this time.

They are counting on 'too big to fail'.

And when demand eventually crashes, all the new production capacity is left without buyers to sell to, so maybe it does not even make sense to create it.

There won't be any new production capacity until DDR6 comes out. There is no point in investing into an obsolete technology like DDR5.

China will happily take that business. They are producing DRAM now, aren't they?


unfortunately not too promising: https://www.reuters.com/commentary/breakingviews/chinas-chip...

I wish they didn't stop producing DDR4, at least they'd be the sole producer of that.


You have to wonder whether they really intend to use the RAM, or whether they just spotted an opportunity to corner an essential market and choke/extort in resale.

I feel like what OpenAI has started to do was to accumulate as much compute resources as possible just so no one else has them.

> Right now it seems like these wafers will just be stockpiled in warehouses – like a kid who hides the toybox because they’re afraid nobody wants to play with them, and thus selfishly feels nobody but them should get the toys!

https://www.mooreslawisdead.com/post/sam-altman-s-dirty-dram...


I'm wondering this as well. Buying 40% of global production just sounds too much. What kind of user counts would they require for that much compute to pay off? Billions of people? What's the chance they could actually get that many users and charge them money? Zero?

Do they have the extra money to play games like that? I thought they were pulling back on ads because of the community reaction even though they wouldn't be introducing it now unless they needed it.

Money is like energy. You can flow it around a circuit if resistance is low.

They are not going to repeat the Windows 8 debacle again, are they? (Overproducing in response to perceived demand, only to not get the demand when they are ready to sell.) Didn't that bankrupt some memory manufacturers?

> Supply and demand.

Fake demand - fueled by ponzi debt financed by other debt.

Oh, forgot to mention - fake supply too. China was doubly-excluded from the RAM market by using both equipment export bans and tariffs - ensuring the supply is frozen solid.

I'm pretty sure this isn't a coincidence or due to incompetence.


I was just talking to JChris Anderson about Strudel last week. He forked it, adding "snaps", where users can snapshot their work, allowing for the creation of multi-layered songs; added a "vibe" tab so anyone can easily update the code with prompts; and made a few other changes.

Here's the fork on GitHub: https://github.com/VibesDIY/strudel

Here's a preview of what it would look like when merged: https://strudel.use-vibes.com/

Here he is playing around with the preview: https://www.youtube.com/watch?v=0oJhnkWDafM


It's a bit annoying that he forked it back to github, when strudel was purposefully moved to codeberg for ethical reasons.

Testing at these labs training big models must be wild. It must be so much work to train a "soul" into a model, run it in a lot of scenarios, the Venn overlap between the system prompts etc., see what works and what doesn't... I suppose you try to guess what in the "soul source" is creating what effects as the plinko machine does its thing, then go back and do that over and over... seems like it would be exciting and fun work, but I wonder how much of this is still art vs science?

It's fun to see these little peeks into that world, as it implies to me they are getting really quite sophisticated about how these automatons are architected.


The answer is "yes". To be really really good at training AIs, you need everyone.

Empirical scientists with good methodology who can set up good tests and benchmarks to make sure everyone else isn't flying blind. ML practitioners who can propose, implement and excruciatingly debug tweaks and new methods, and aren't afraid of seeing 9.5 out of 10 of their approaches fail. Mechanistic interpretability researchers who can peer into model internals, figure out the practical limits and get rare but valuable glimpses of how LLMs do what they do. Data curation teams who select what data sources will be used for pre-training and SFT, and what new data will be created or acquired and then fed into the training pipeline. Low-level GPU specialists who can set up the infrastructure for the training runs and make sure that "works on my scale (3B test run)" doesn't go to shreds when you try a frontier-scale LLM. AI-whisperers, mad but not too mad, who have experience with AIs, possess good intuitions about actual AI behavior, can spot odd behavioral changes, can get AIs to do what they want them to do, and can translate that strange knowledge into capabilities improved or pitfalls avoided.

Very few AI teams have all of that, let alone in good balance. But some try. Anthropic tries.


The most detail I've seen of this process is still from OpenAI's postmortem on their sycophantic GPT-4o update: https://openai.com/index/expanding-on-sycophancy/

I hadn't seen this, thanks for sharing. So basically the reward of the model was to reward the user, and the user used the model to "reward" itself (the user).

Being generous, they poorly implemented/understood how the reward mechanisms abstract and instantiate out to the user such that they become a compounding loop; my understanding is this became particularly true in very long-lived conversations.

This makes me want a transparency requirement on how the reward mechanisms in the model I am using at any given moment are considered by whoever built it, so I, the user, can consider them also. Maybe there is some nuance in "building a safe model" vs "building a model the user can understand the risks around"? Interesting stuff! As always, thanks for publishing very digestible information, Simon.


It's not just OpenAI's fuckup with the specific training method - although yes, training on raw user feedback is spectacularly dumb, and it's something even the teams at CharacterAI learned the hard way at least a year before OpenAI shot its foot off with the same genius idea.

It's also a bit of a failure to understand that many LLM behaviors are self-reinforcing across context, and keep tabs on that.

When the AI sees its past behavior, that shapes its future behavior. If an AI sees "I'm doing X", it may also see that as "I should be doing X more". And at long enough contexts, this can drastically change AI behavior. Small random deviations can build up to crushing behavioral differences.

And if AI has a strong innate bias - like a sycophancy bias? Oh boy.

This applies to many things, some of which we care about (errors, hallucinations, unsafe behavior) and some of which we don't (specific formatting, message length, terminology and word choices).
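A toy Pólya-urn-style simulation of that compounding, purely a sketch under the stated assumption that each turn samples behavior X with probability equal to X's current share of the context (no claim about any real model's internals):

    import random

    def run(turns=200, seed=0):
        rng = random.Random(seed)
        x_count, total = 1, 2  # near-neutral start: one X, one not-X
        for _ in range(turns):
            p = x_count / total  # self-reinforcement: seeing X begets more X
            if rng.random() < p:
                x_count += 1
            total += 1
        return x_count / total

    # Identical settings, different seeds: small early deviations compound
    # into wildly different long-run behavior.
    for seed in range(5):
        print(f"seed {seed}: final share of X = {run(seed=seed):.2f}")

An innate bias would tilt the starting counts and skew where every run ends up, which is the sycophancy case.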


Birds and planes operate using somewhat different mechanics, but they do both achieve flight.

Birds and planes are very similar other than the propulsion, landing gear, and construction materials. Maybe bird vs helicopter, or bird vs rocket.

> other than the propulsion and landing gear, and construction materials

"Apart from the sanitation, the medicine, education, wine, public order, irrigation, roads, the fresh water system and public health, what have the Romans ever done for us?" Monty Python's Life of Brian.

P.S. It is relative, but quite a lot of differences IMHO.


Yes indeed, but they still use wings, fly in the air and so on.

Artificial neural networks have very little in common with real brains and have no structural or functional similarities besides "they process information, and they have things called neurons". They can perform some of the same tasks though, like how a quadcopter can perform some of the duties of a homing pigeon.


This article led me to an interesting exchange about vitiligo; thought I'd post it here for posterity:

Me - Nov 28, 2025, 10:51 AM:

Hey Dr. Dadachova, there was an article in the bbc today about radiotrophic fungus, I did some reading on them and it got me thinking about vitiligo. Current dermatology literature focuses on the immune destruction of melanocytes, often citing "oxidative stress" as a cause. However, there seems to be a total absence of data regarding the physical structure of the melanin polymer itself in these patients.

re: your findings that melanin’s electronic structure (EPR signal) changes under stress to become "protective/radiotrophic," I am wondering if the inverse mechanism could be driving vitiligo, where the melanin polymer is structurally defective (acting as a pro-oxidant "leaky capacitor") rather than a protective shield?

To your knowledge, has anyone ever applied the EPR techniques used in your fungal research to analyze melanin isolated from the active border of Vitiligo lesions? It seems plausible that a structural defect in the polymer physics could be the upstream trigger for the autoimmunity, similar to the "toxic melanin" theories in Parkinson’s disease.

I realized this sits at the exact boundary of your expertise in melanin physics and clinical pathology, and I was curious if you had ever explored this link??

tnx for reading! have a great weekend! :)

j.

--

Dadachova, Kate Fri, Nov 28, 9:57 PM to me

Hello John,

Thank you for your message and interest in melanin work! We have never looked at melanin in Vitiligo lesions but I think that your hypothesis about defective melanin could be correct. I know that there are studies showing absence of EPR signal in Vitiligo, and, on the contrary, enhanced melanin signal in melanoma in comparison with benign nevi. Probably an interesting study for a pathologist to perform!

Best regards,

Kate


Blog posts like this make me think model adoption, and matching use cases to the model, is... lumpy at best. Every time I read something like this I wonder what tools they are using and how. Modern systems are not raw transformers. A raw transformer will "always output something," they're right, but nobody deploys naked transformers. This is like claiming CPUs can't do long division because the ALU doesn't natively understand decimals. Also, a model is a statistical approximation trained on the empirical distribution of human knowledge work. It is not trying to compute exact solutions to NP-complete problems. Nature does not require worst-case complexity; real-world cognitive tasks are not worst-case NP-hard instances...
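A minimal sketch of what "not a naked transformer" means in practice: a stand-in generator wrapped with validation and abstention. All names here are hypothetical; this is not any real provider's API:

    import random

    def raw_model(prompt: str) -> str:
        # Stand-in for a raw sampler: it always emits *something*.
        return random.choice(["42", "forty-two", "totally made up"])

    def answer(prompt: str, validate, max_tries: int = 3):
        # The system layer, not the transformer, decides when to abstain.
        for _ in range(max_tries):
            candidate = raw_model(prompt)
            if validate(candidate):
                return candidate
        return None  # abstain rather than return unvalidated output

    print(answer("What is 6 x 7? Digits only.", lambda s: s.strip().isdigit()))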


Those are indeed some very nice photos, though it is clear that a couple of them were made by aliens.

This modern day chauvinism needs to die.

Ancient peoples were fully as intelligent as us.

Maybe even smarter, as there was no lead poisoning their brains!


> "Maybe even smarter as there was no lead poisoning their brains!"

It's a good guess the people who made these artifacts (the bronze ones particularly) suffered from lead poisoning: lead was a primary alloying metal for bronze. You can even look up elemental analysis for BMAC bronze artifacts specifically: "...contain appreciable amounts of arsenic (up to 3%) and lead (up to 4%), as did bronzes of the preceding chronological horizons"[0].

The early smelting techniques simply released everything into the open atmosphere, as fine particulate fumes. Environmental samples going back 5,200 years show regional-scale lead pollution[1] from Bronze Age metals smelting.

[0] https://www.frontiersin.org/journals/earth-science/articles/... (under "3.1.3 Bronzes of the Late Bronze Age II")

[1] https://www.nature.com/articles/s43247-024-01921-7

("The smelting- and cupellation-related release of Pb into the environment is predominantly via the fine-particle fraction and, as such subject to large-scale atmospheric transport, resulting in a supra-regional to hemisphere-wide distribution9,10,11,20,21,22,23")


Sorry if you were offended, I was just making a joke. I don’t believe the ancient aliens theories, but a lot of people do, and that’s what I was poking fun at.

They didn't know about equality, bacteria, electromagnetism, fallibilism, evolution ... so you must mean a kind of "fully intelligent" that includes extremely ignorant people with bad ideas.

You didn’t know about those either. You were taught it by someone else, who learned about it from someone else, and so on. Sure some people discovered things along the way but you specifically don’t get credit for their progress. Does that make you ignorant? What about all the things that those people did discover or invent - surely you can see how the progress they made at that time, with so few resources and advancements, was truly revolutionary. Some of those advancements were far harder and significant than the stuff we like to point at in modern times like rockets.

Credit? Screw credit, that's not what I'm talking about. By accident, good ideas wander into our minds and make us smart. OK, there's some amount of positive feedback in this process (ideas about how to accumulate more good ideas). But "ignorance" means being uninformed, that is, not lucky enough to be inhabited by many of these good ideas in the first place. And there's a lot more of them floating around in modern times, and so it's harder to be ignorant, and easier to be lucky, and well-informed, and since ideas help with being a smarty-pants, it's easier to stumble into being smart. Thus ancient people were stupid, in a manner of speaking.

While they may not have known many things we know today, they had a better grasp of masonry, pottery, and metallurgy than most people today. Likewise, these are people who understood human experience quite well, and understood the animals and plants around them better than most of us today.

Regarding sanitation, there is evidence that they understood the corruption of the flesh and many Bronze Age cultures had topical treatments that were quite effective antiseptics. So, while not understanding what bacteria are, they still knew the effect.


Some modern ideas are about thinking.

And many of those ideas are quite old. People have been dealing with their own minds for quite some time, and the past had far fewer distractions from facing one’s self. Things like mindfulness, CBT, theory of mind, and most philosophy are built upon quite ancient traditions, observations, and beliefs.

Some modern ideas about thinking are modern.

How about: ancient people had brains that were physically similar to anyone modern, and sometimes they came up with one or two good ideas, but they were generally poorly informed and full of misconceptions by modern standards.


I’d quibble about the tone with “one or two good ideas” but with the general meaning, I wouldn’t disagree.

You don't speak Cantonese.

How can you possibly call yourself an intelligent person if you cannot speak Cantonese?


Well, Cantonese is a bad idea anyway.

(I don't like tonal languages because they interfere with tone of voice, and Cantonese has extra tones.)

Being able to read Chinese could be advantageous, and then I'd be less of an idiot, it's true.


I also prefer languages that are comfortable with being disliked.

Huh? Knowledge/education and intelligence aren’t equivalent. Is English your first language? Seems a very basic error to make otherwise.

That's fine, I was just confirming that that was what you meant by intelligence.

It's somewhat different from "smart", isn't it? Since it includes everyone.

