Rohansi's comments | Hacker News

The RP2350 (Pico 2) supports external PSRAM so you can add several MB more there if you need it.

I have always wondered what kind of bandwidth you could get from multiple channels of PSRAM driven by PIO/DMA. Individually they're not so speedy (although the APS6408L-OCH-BA seems pretty crazy), but how many can you run simultaneously? In terms of the RP2350, it would be fascinating to see how many times a second you could replace the entire contents of SRAM.
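As a rough back-of-envelope for that question, here is a sketch under assumed numbers: a QSPI PSRAM chip clocked at 84 MHz with 4 data lines per PIO-driven channel (real parts, clocks, and PIO overhead will differ, and the octal APS6408L would be much faster):

```python
# Hypothetical figures: QSPI PSRAM at 84 MHz, 4 data lines per channel.
PSRAM_CLOCK_HZ = 84_000_000
DATA_LINES = 4
bytes_per_sec_per_chip = PSRAM_CLOCK_HZ * DATA_LINES // 8  # 42 MB/s

SRAM_BYTES = 520 * 1024  # RP2350 on-chip SRAM

for channels in (1, 2, 4):
    bw = bytes_per_sec_per_chip * channels
    refills = bw / SRAM_BYTES
    print(f"{channels} channel(s): {bw / 1e6:.0f} MB/s, "
          f"~{refills:.0f} full SRAM rewrites/s")
```

Even one channel at these assumed speeds could rewrite all of SRAM dozens of times per second, ignoring command/address overhead and DMA contention.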

You can surely get more bandwidth, but it's less useful because it won't be directly accessible by the CPU anymore.

I believe PSRAM is directly accessible by the CPU (there is even a cache for PSRAM). Or am I missing something?

Lerc was referring to driving multiple PSRAM chips from a single Pico with PIO/DMA. The CPU will only work with a single PSRAM chip.

Yes, I was thinking of it more like bank switching.

Although, going back to the start of the thread, where the suggestion was adding more RAM to future chips, perhaps the request could be for support for multiple channels in the future.

It's the age-old question of parallel vs. serial vs. multi-channel serial.


PSRAM is a possibility that I have explored for offloading the delay line buffers, which occupy quite a significant chunk of SRAM at the moment. It should be fast enough.

Default specs matter a lot for worldwide availability and affordability, as well as for the willingness of people to spend a lot of time creating free software for it.

520KB of SRAM is actually on the high end for microcontrollers. It doesn't seem like much, but SRAM is on-die and significantly lower density than DRAM. For comparison, it's the same type of memory used for CPU caches, which are also small!

You can easily find dev boards with 8MB of PSRAM online if you need it. Or you can buy the PSRAM and hook it up yourself. If you still need more memory than that then you're looking at the wrong chip for the job.


I shouldn’t wish for more onboard RAM for the next Pico iteration because other boards already have more. Okay, dude.

Other boards with the same MCU as the Pico, yes. The only downside is you lose one GPIO.

> This is particularly frustrating if functions are sometimes async, like lazy loaders or similar cache things.

This is a solved problem in C#. You can use ValueTask&lt;T&gt; instead of Task&lt;T&gt;, and nothing is allocated if the method completes synchronously without ever awaiting.


Not natively. It's not a supported architecture and lacks an MMU. Those who have run Linux on it have done so through a RISC-V emulator or similar.

Hmm... I don't have an ESP32-S3 to test, but looking at one of the esp32s3 linux builds online[1], the binaries are compiled for the xtensa arch. So it does seem to run natively rather than through some kind of emulation. Linux's source does have an arch/xtensa/ directory, so that arch seems to be supported on some level by the kernel. ESP32-S3's docs also mention having an MMU[2], though it's possible it's not sufficiently featureful.

[1] https://github.com/hpsaturn/esp32s3-linux/releases/tag/0.0.4

[2] https://docs.espressif.com/projects/esp-idf/en/stable/esp32s...


Interesting! I've only seen the older projects which probably predate the ESP32-S3. I wonder how usable this is in reality though because you typically can only buy modules with up to 8MB of PSRAM, so you'd have to swap the PSRAM out for higher capacity or manufacture a custom board.

How does soldering memory help reduce sleep power consumption vs. using a socket? What is different other than how they are physically connected to the board?

It's not the form factor itself that is the problem. LPDDR is more efficient for various reasons and cannot be on a DIMM. It physically will not work with a socket. That is the problem that LP-CAMM solves: LPDDR but still removable.

You did not answer the question.

Did I not? I'm trying my best here. The question is sort of off-target, though. What I am trying to say is: 1) DDR uses more power than LPDDR; 2) LPDDR cannot work on a DIMM socket, because of its lower voltage signals, and other reasons; 3) SO-DIMMs always contain the higher power DDR; QED) if you insist on SO-DIMMs, then you have to spend more energy.

Rohansi was basically asking 'why'; you keep reiterating that DDR uses more power than LPDDR but fail to answer why this is the case. Is it clock speed? Is it voltage? Is it a protocol/specification difference? 'Various reasons' is not an answer.

LPDDR is very different from DDR so I don't really feel like diving into it in this tiny box. It has its own oscillators so the CPU doesn't have to clock it while asleep; it adaptively refreshes less often according to temperature; during self-refresh the cells are charged to a lower voltage that wouldn't really work for high-speed I/O but works fine for retention.

There is no physics-based reason why it couldn't work. If the industry really wanted to do it, they could. But they don't. The primary reason is LPDDR just has too many pins: a DDR5 SODIMM has 262 pins and is already an unwieldy beast, while LPDDR5 has 644 pins.

LPCAMM2 really shows the trade-offs. It adds a lot of bulk and cost, and repairability hasn't been valued high enough by the market to cover that overhead for most consumers. That's why Micron exited the market they played a big part in founding.

https://www.ifixit.com/News/95078/lpcamm2-memory-is-finally-...


Assuming you're referring to Apple Silicon's memory bandwidth, that is not necessarily because the memory is on-package. The bandwidth comes from having more channels to access memory. This gives the SoC a wider bus to increase throughput vs. your typical x86 system with two channels. For whatever reason, Intel/AMD decided that two channels is all typical consumer chips can support now, so that's on them.
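To make the "more channels" point concrete, here is an illustrative sketch: peak bandwidth scales with total bus width (channel count times channel width) times transfer rate. The configurations and the `peak_gb_per_s` helper are my own illustrative examples, not exact specs for any particular product:

```python
def peak_gb_per_s(mt_per_s: int, bus_bits: int) -> float:
    """Peak bandwidth: transfers/s x bytes per transfer, in GB/s."""
    return mt_per_s * (bus_bits / 8) / 1000

# Typical dual-channel desktop: DDR5-5600 on a 2 x 64-bit bus
print(peak_gb_per_s(5600, 128))   # 89.6 GB/s

# Wide unified-memory SoC: LPDDR5-6400 on a 512-bit bus
print(peak_gb_per_s(6400, 512))   # 409.6 GB/s
```

Same memory technology generation, over 4x the peak bandwidth, purely from the wider bus.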

Ah I see, thanks for breaking it down.

On laptops, right? We've seen more channels for years elsewhere.

Yes, on laptops, but also on most desktops now. Higher channel counts are increasingly limited to server systems.

There is no VRAM in this laptop so how is it not unified? The CPU and GPU both share the same memory.

Let's say you were making an app and had to decide between native or PWA. You don't need much more than push notifications, so there shouldn't be a big difference between the two. You do your research and find that you can have either:

a) Native app: publish to App Store, make links on your website directly open the App Store page where the user can install your app

b) PWA: your app is usable directly on your website, but push notifications don't work unless your users add the page to their home screen. You can't have a button on your website to install it - you must instruct the user to navigate some Safari menus to find an option which is hidden *six* taps away

Do you think b) is a viable option? I don't, and I'm sure that's one of the reasons the tech industry is anti-PWA. It also doesn't help that Apple drags its feet with supporting new standards (web push only supported in iOS Safari from 2023, but 2016 in other browsers).


Native apps can't send push notifications without asking for permission on iOS either. If all you need is push notifications, that shouldn't be your deciding factor.

I didn't even cover push notification permission in my comment. The PWA also needs to ask for permission to send push notifications after the user adds it to their home screen, so they're the same.

> If all you need is push notifications, that shouldn't be your deciding factor.

I'm dealing with this right now! Rewriting a React Native app as a PWA because reasons. Current plan is to ship it as a WebView app to avoid Apple's PWA installation hurdle. Everyone else will be able to easily install it as a PWA.


> Wanting a website to access your USB stack directly (or Bluetooth, which has a similar standard) is such an extremely niche use case that it’s probably better for it to be available only as an opt-in extension.

It doesn't give direct access. You go through the browser which restricts what you can use it to touch (eg. can't access USB drives). The user also needs to choose which USB device to allow access to before you can do anything.

> More of the enormous bloated JS web API specs should be implemented as browser plugins.

Then you'll get one of two outcomes:

1. Users install extensions without caring about what they do. I don't see why we should train people to install more extensions when there are already a lot of malicious extensions!

2. Hardware manufacturers decide not to adopt these standards and continue shipping executables for everything, which are not sandboxed at all and don't support all platforms.


I think there is still a problem.

Let me give a concrete example. Hardware "passkeys" - FIDO2 authenticators - are designed such that their credentials are bound to a particular Relying Party (web site). Browsers enforce this by sending the current web domain the user is on to the authenticator when Javascript tries to list, create, or use a credential.

This would be completely broken if Javascript talked directly to a FIDO2 USB device, because the JS could send a Relying Party that is NOT the web site the user is currently on.

So Chrome blocks WebUSB from communicating with USB devices whose USB HID descriptor "looks like" a FIDO one, by some hardcoded "not this device" blacklist code in Chrome itself.

But what if what you have connected to your computer is a USB NFC card reader, and the user taps their FIDO authenticator on that? Letting the Javascript communicate directly with the card reader breaks the FIDO security model exactly the same way... but Chrome allows it!

The problem with WebUSB is that it exposes devices that were built under the threat model that only trusted code would be able to access them to untrusted code. The set of devices acceptable for WebUSB use should have been a whitelist instead of a blacklist to be secure. Letting the user choose the device to grant access doesn't solve the problem, because the user doesn't have a way to understand what will happen when the site is granted access, per the FIDO example I gave above.


> But what if what you have connected to your computer is a USB NFC card reader, and the user taps their FIDO authenticator on that?

So the user would need to:

1. Keep the malicious page open, or install a malicious extension

2. Grant access to the card reader from a list of USB devices

3. Then tap their card on that reader

IMO a bad actor is going to have more success getting people to run an executable they made the browser download. There's only so much you can do to protect people from themselves. Not everyone needs software to be locked down like a padded room.

> The problem with WebUSB is that it exposes devices that were built under the threat model that only trusted code would be able to access them to untrusted code.

Which platforms have USB devices locked down to "trusted code" only?


That's a great way to kill adoption of a feature. But what has WebUSB done to you?

Existing. HDR is also on the list.

Bake ads into them.
