Hacker News | SergeAx's comments

My desktop PC has been constantly evolving for about 15 years now, literally the ship of Theseus. Currently, following the latest upgrade in September, it's an absolute powerhouse.

During the same time, I burned through four laptops (for travel), all of them underpowered: at most 16 GB of RAM, no real GPUs, and 14"-15" screens, yet expensive and with poor resale value.

My next travel computer will be a Framework 16.


Is this a common occurrence in the US? It sounds worse than tipping culture.

Nah, just Vegas being Vegas. The whole area is designed to squeeze every dollar out of you.

Good to know, in some bizarre way. Thank you!

What is your usage scenario for this device? It's $400 and 3/4 kg.

I bought that specific model to provide connectivity for our robotics team’s pit computers. For this need, good antenna performance is key, since venues differ wildly in WiFi and cell coverage, and when we set up the evening before comps, I want the best chance of getting a solid connection and offering it to the pit LAN.

But now that I have it, the device is handy for family travel as well. Put an unlimited-data eSIM in the device and everyone has “unlimited” data on the road, and when we arrive at a hotel or Airbnb, one person signs it on to WiFi and everyone is connected, including Tailscale connections to home.

If I was doing personal and work travel only, I’d look for a smaller unit, but still with a decent battery.


According to their website, it weighs 761g.

Right, 3/4 kg is 750 g.

Oh wow, I got completely confused by this usage, and thought it meant 3 to 4 kilograms :)

I will use ¾ next time)

Interestingly, both of your points could be addressed by adopting a company-wide policy: if a meeting has no agenda attached, it is optional to join. Or, in short, "no agenda - no attenda."


I am and have been in touch with several multimillionaires and billionaires, and in no way are they "regular people". One common trait is a gross intolerance for failure to execute their plans. I am not saying that is necessarily bad, but the amount of resources they may throw at their dissatisfaction is often frightening.


Is it really possible to control file locations on HDD via Windows NTFS API?


No, not at all. But by putting every asset a level (for example) needs in the same file, you can pretty much guarantee you can read it all sequentially without additional seeks.

That does force you to duplicate some assets a lot. It's also more important the slower your seeks are. This technique is perfect for disc media, since it has a fixed physical size (so wasting space on it is irrelevant) and slow seeks.
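A minimal sketch of the technique in Python (hypothetical format and names, not any real engine's pak layout): all of a level's assets go into one blob behind a small index, so loading the level is a single sequential read with no per-asset seeks.

```python
import io
import json
import struct

def pack_assets(assets: dict[str, bytes]) -> bytes:
    """Pack {name: data} into one blob: [4-byte index length][JSON index][payload].

    The index maps each asset name to its (offset, length) within the payload.
    """
    index: dict[str, tuple[int, int]] = {}
    payload = io.BytesIO()
    for name, data in assets.items():
        index[name] = (payload.tell(), len(data))
        payload.write(data)
    idx = json.dumps(index).encode()
    return struct.pack("<I", len(idx)) + idx + payload.getvalue()

def load_all(blob: bytes) -> dict[str, bytes]:
    """Read the index, then slice out every asset in file order (one pass)."""
    (idx_len,) = struct.unpack_from("<I", blob)
    index = json.loads(blob[4:4 + idx_len])
    base = 4 + idx_len
    return {name: blob[base + off: base + off + size]
            for name, (off, size) in index.items()}
```

The duplication trade-off mentioned above shows up naturally here: a texture shared by two levels gets written into both levels' blobs, trading disc space for the guarantee that each level is one contiguous read.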


> by putting every asset a level (for example) needs in the same file, you can pretty much guarantee you can read it all sequentially

I'd love to see it analysed. Specifically, the average number of nonsequential jumps vs the overall size of the level. I'm sure you could avoid jumps within megabytes. But if someone ever got close to filling up the disk in the past, the chances of finding contiguous gigabytes are much lower. This paper effectively says that for long files, gaps are almost guaranteed: https://dfrws.org/wp-content/uploads/2021/01/2021_APAC_paper... So at that point, you may be better off preallocating the individual files and eating the cost of switching between them.


From that paper, table 4, large files had an average # of fragments around 100, but a median of 4 fragments. A handful of fragments for a 1 GB level file is probably a lot less seeking than reading 1 GB of data out of a 20 GB aggregated asset database.

But it also depends on how the assets are organized, you can probably group the level specific assets into a sequential section, and maybe shared assets could be somewhat grouped so related assets are sequential.


Sure. I’ve seen people who do packaging for games measure various techniques against hard disks typical of the time, maybe a decade ago. It was definitely worth it then to duplicate some assets to avoid seeks.

Nowadays? No. Even those with hard disks will have lots more RAM and thus disk cache. And you are even guaranteed SSDs on consoles. I think in general no one tries this technique anymore.


> But if someone ever got closer to filling up the disk in the past, the chances of contiguous gigabytes are much lower.

By default, Windows automatically defragments filesystems weekly if necessary. It can be configured in the "defragment and optimize drives" dialog.


Not 'full' defragmentation. Microsoft's labs did a study and found that beyond 64 MB slabs of contiguous file data you don't gain much, so the optimizer doesn't bother getting gigabytes fully defragmented.

https://web.archive.org/web/20100529025623/http://blogs.tech...

An old article on the process.


> But if someone ever got closer to filling up the disk in the past, the chances of contiguous gigabytes are much lower

Someone installing a 150 GB game surely has 150 GB+ of free space, and there would be a lot of contiguous free space.


It's an optimistic optimization so it doesn't really matter if the large blobs get broken up. The idea is that it's still better than 100k small files.


Not really. But when you write a large file at once (like with an installer), you'll tend to get a good amount of sequential allocation (unless your free space is highly fragmented). If you load that large file sequentially, you benefit from drive read ahead and OS read ahead --- when the file is fragmented, the OS will issue speculative reads for the next fragment automatically and hide some of the latency.

If you break it up into smaller files, those are likely to be allocated all over the disk; plus you'll have delays on reading because windows defender makes opening files slow. If you have a single large file that contains all resources, even if that file is mostly sequential, there will be sections that you don't need, and read ahead cache may work against you, as it will tend to read things you don't need.


They downloaded 43 GB instead of 152 GB, according to SteamDB: https://steamdb.info/app/553850/depots/ Now it is 20 GB => 21 GB.


Oracle lawyers want you to think so.


Ahem, Temurin/OpenJDK disagree


You mean the company that 100% open-sourced Java and made the open-source (same license as Linux) OpenJDK the reference implementation?


> It’s virtually impossible for me to estimate how long it will take to fix a bug, until the job is done.

I think what they mean is that after 2 days of working on a bug you stop regardless of the result, leaving a paper trail behind for the next person.


I wonder, would it be funny if it turned out that this technique dramatically increases the effectiveness of any prompt? Not by 10-15%, as in "I'll give you a big tip" or "If you do this task poorly, I'll get fired," but by a factor of three?

