
Yeah, see Singapore, more or less.

>The PAP has been returned to power in every general election and has thus formed the Cabinet since 1959

https://en.wikipedia.org/wiki/Government_of_Singapore


We're on a site right now that hides like counts for these reasons :D

https://news.ycombinator.com/item?id=2595605


Oh wow, that's really on point. It'd be cool if the front page were like that as well for a week, to see how it would be different. (pg, if you're seeing this.)


The sort order for the frontpage is already determined by a secret score that is only partly related to upvotes.


I'm glad they stuck with hiding the like counts.


This article correctly states that committed memory is that in use + memory that's being paged out. Now why would you want to know the committed memory over the actual physical RAM in use?

I can trivially create an app that memory maps a massive file and will show several GB of committed memory. That memory won't be in use, of course; memory mapping files so that the OS will page them in/out as required is intentional. Those GB of committed memory aren't something you should care about. I'd be scared if someone looked at the committed memory use of a program that correctly uses mmap and exclaimed "OMG, this uses TB of RAM!".
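
Something like the following is all it takes (a minimal sketch; the path is made up for illustration):

    // Minimal sketch: map a huge file read-only on Windows. The mapping
    // costs address space up front, but nothing is paged into physical
    // RAM until you actually touch the bytes.
    #include <windows.h>
    #include <cstdio>

    int main() {
        HANDLE file = CreateFileW(L"C:\\logs\\huge.log", GENERIC_READ,
                                  FILE_SHARE_READ, nullptr, OPEN_EXISTING,
                                  FILE_ATTRIBUTE_NORMAL, nullptr);
        if (file == INVALID_HANDLE_VALUE) return 1;

        // 0, 0 = map the whole file, however big it is.
        HANDLE mapping = CreateFileMappingW(file, nullptr, PAGE_READONLY,
                                            0, 0, nullptr);
        if (!mapping) return 1;

        const char* view = static_cast<const char*>(
            MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0));
        if (!view) return 1;

        // Touching a byte pages in just that one page.
        std::printf("first byte: %d\n", view[0]);

        UnmapViewOfFile(view);
        CloseHandle(mapping);
        CloseHandle(file);
    }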

Task Manager is doing the right thing here. It's showing you what's actually paged in and in use right now.


OOOOOOO, something I actually know!

> Now why would you want to know the committed memory over the actual physical RAM in use?

Because in Windows, committed size is capped relative to physical size. You can commit a lot more than RAM, but watch your page-file grow.

malloc() can fail on Windows for this reason. This is not the same on Linux or any of the BSDs I've tried. :)

I experienced/discovered this last August... sometimes understanding a lot about Linux can make you blind to the architectural differences Windows has.

I wrote some cross-platform CPP to show it[0]

[0]: https://gist.github.com/dijit/cb2caa1a40d48e03613f5af0e518d6...
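
Roughly, the idea looks like this (a simplified sketch of the same experiment, not the gist verbatim):

    // Simplified sketch: keep committing memory without ever touching
    // it. On Windows each allocation is charged against the commit
    // limit (RAM + pagefile), so malloc eventually returns NULL. On
    // Linux with the default overcommit policy it generally won't
    // fail here.
    #include <cstdio>
    #include <cstdlib>

    int main() {
        const size_t chunk = 1ull << 30;  // 1 GiB per allocation
        size_t total = 0;
        void* p;
        while ((p = std::malloc(chunk)) != nullptr) {
            total += chunk;  // note: the memory is never written to
        }
        std::printf("committed %zu GiB before malloc failed\n",
                    total >> 30);
    }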


> This is not the same on Linux

Linux lets you choose an overcommit policy. https://www.kernel.org/doc/Documentation/vm/overcommit-accou...


You can choose an overcommit policy on Linux, but most library developers on Linux have chosen the default and regularly allocate wide swaths they don't intend to use.

This is a real pain when moving an application from FreeBSD to Linux, as effective limits on memory are lost (a ulimit set at ~90% of RAM results in a malloc failure and a clean crash dump, rather than death by thrashing or an untrappable OOM kill).

There could maybe be a middle ground where malloc would allocate large chunks of address space for ease of administration, and then ask the OS to commit those pages in smaller chunks as needed. Often there's not a lot you can do when allocation fails, but it's far more actionable if the failure is returned from a syscall than if it happens when you write to an unbacked page, which could be basically anywhere in your program.
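
(On Windows, VirtualAlloc already exposes exactly this split; a rough sketch of the reserve-then-commit pattern:)

    // Rough sketch: grab a big address range up front, then ask the OS
    // to commit (back with RAM/pagefile) small pieces as they're
    // needed, so failure comes back from a call rather than from a
    // page fault somewhere random.
    #include <windows.h>
    #include <cstdio>

    int main() {
        const size_t reserved = 1ull << 33;  // 8 GiB of address space
        const size_t chunk = 1ull << 20;     // commit 1 MiB at a time

        // Reserve only: consumes address space, not commit charge.
        char* base = static_cast<char*>(
            VirtualAlloc(nullptr, reserved, MEM_RESERVE, PAGE_NOACCESS));
        if (!base) return 1;

        // Commit the first chunk; this is where commit is billed and
        // where running out is reported cleanly via the return value.
        if (!VirtualAlloc(base, chunk, MEM_COMMIT, PAGE_READWRITE)) {
            std::puts("out of commit");
            return 1;
        }
        base[0] = 42;  // safe now: the page is committed

        VirtualFree(base, 0, MEM_RELEASE);
    }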


This doesn't occur if you memory map a file, though (barring certain flags that you can set, as stated by quotemstr below).

You can legitimately have Windows stating many GBs of committed RAM without that RAM actually being used and without touching the system's pagefile/swap. It's also common for this to occur: pretty much every program capable of opening large files (GB+) in a non-sequential fashion does this.


Hm, maybe I wasn't clear. It's not actually /using/ the memory when it's committed.

But the sum of your committed memory across all applications must exist in some form on the host system.

So, for example, it's a common performance optimisation to double the amount of allocated space whenever you grow anything in C++. What this means is that you're not actually using the space yet; you allocate ahead of time because malloc() and zeroing are kinda slow.

So, say you have 128MB of RAM backing your program's address space and you just doubled your array from 75MB to 150MB. That extra 22MB beyond RAM must exist somewhere, even though you're only using 76MB, and even if the OS shows the memory as free (which it will).

Them's the rules, and I promise you that I have thoroughly tested this, as it was causing a really nice crash on my servers even though we had more than 50% of memory "free".
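
You can watch the gap between used and allocated with a plain std::vector (quick sketch):

    // Sketch: capacity (what's been allocated, and on Windows
    // committed) routinely outruns size (what's actually in use),
    // because growth happens in big multiplicative steps.
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<char> v;
        size_t lastCapacity = 0;
        for (int i = 0; i < 50'000'000; ++i) {
            v.push_back(0);
            if (v.capacity() != lastCapacity) {
                lastCapacity = v.capacity();
                std::printf("size %zu -> capacity %zu\n",
                            v.size(), lastCapacity);
            }
        }
    }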


Memory-mapped files do exist in some form or another, though: as the file itself. That's the point of memory mapping files. You could memory map every file on your computer right now. That's TBs of files. There will be no physical RAM usage and no swap file usage unless you start actually working with those files (at which point they will be paged in). Yet this will show as 'committed memory' in Task Manager.

Your example above isn't memory mapping files; it's just allocating RAM, and that does have to exist in RAM or the swap file. But that's not all that 'committed memory' shows, which is the whole point: the column the article is telling people to use is misleading.


They're talking about mmap'ed files, not memory.


>Task Manager is doing the right thing here

I had a problem on my Windows 10 PC for a long time where I was clearly running out of RAM: Task Manager would show 100% usage and everything would get slow. However, if I added up the RAM used by all the processes, it was nowhere close.

So something was using up RAM that Task Manager gave me no visibility into. I had to download obscure tools and do some guesswork to figure it out. It would have been really nice if Task Manager could just report it in the first place. (It turned out to be a network card driver with a memory leak in it.)


Yep, this is particularly evident if you use VirtualBox a lot - the memory use isn't in the actual process but in one of the drivers. It has also happened to me with VPN drivers.

Best tool I've seen to start finding where it's going is RAMmap [1].

1. https://docs.microsoft.com/en-us/sysinternals/downloads/ramm...


> committed memory is that in use + memory that's being paged out

That's not actually true, though. You can see an increase in a process's commit charge without anything new being written to the pagefile. Commit is a check that the kernel writes to applications; you're confusing that check with cash in a wallet. You can also have commit without any virtual address space to blame for it, through section handles or other tricks. It's complicated.


I'm not going to dispute that, but I just want to highlight that it doesn't change the fact that committed RAM can show as extremely high just from working with memory-mapped files.

Memory mapping a GB log file, for example, will absolutely show GBs of committed memory, but in reality you'll only have the last page in actual physical RAM.


Right. When you memory-map a file, what you've essentially done is add a temporary new pagefile to the system (your mapped file), and when you work with memory backed by that file, it's no different from working with "anonymous" memory backed by the system-wide pagefile.


Aside: I have to say this is a genius perspective on mmap, and I now finally understand why the same syscall is used for both. I never understood the link.

Thank you.


I understand commit can happen without the physical pagefile being written to. But you're saying commit can happen even without any increase in virtual address space usage? That seems strange; could you explain how that can happen?


Change a PAGE_READONLY mapping to a PAGE_WRITECOPY one. The commit charge is billed at VirtualProtect time, and the protection can fail if you run out of commit. The kernel doesn't have to commit anything for a PAGE_READONLY mapping because all pages in such a mapping are guaranteed to be clean and trivially evictable. Not so once you introduce the possibility of COW faults. Reserving a big range of address space and committing it a little bit at a time is a very common pattern.

Another thing: create a 1GB section. Map it, and fill it up. Unmap it. Map it again. What you wrote is still there. Between the map and remap, you have commit without corresponding address space.
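
A sketch of that second trick, in case it helps:

    // Sketch: a pagefile-backed section keeps its contents (and its
    // commit charge) as long as the handle is open, even while no
    // view of it is mapped anywhere in your address space.
    #include <windows.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        const DWORD size = 1u << 30;  // 1 GB section
        HANDLE section = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr,
                                            PAGE_READWRITE, 0, size,
                                            nullptr);
        if (!section) return 1;

        char* view = static_cast<char*>(
            MapViewOfFile(section, FILE_MAP_WRITE, 0, 0, 0));
        if (!view) return 1;
        std::strcpy(view, "still here");
        UnmapViewOfFile(view);  // no view now, but the commit remains

        view = static_cast<char*>(
            MapViewOfFile(section, FILE_MAP_READ, 0, 0, 0));
        if (!view) return 1;
        std::puts(view);  // prints "still here"

        UnmapViewOfFile(view);
        CloseHandle(section);
    }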


The digit sums are 0 mod 9, and he's looking for precisely 9. E.g. the digit sum of 9^3 = 729 is 18. That's why you don't see 9s everywhere.
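
(In code, as a quick sketch:)

    // Sketch: a number's digit sum is congruent to the number mod 9,
    // so every multiple of 9 has a digit sum that's a multiple of 9 -
    // but not necessarily 9 itself.
    #include <cstdio>

    int digitSum(long long n) {
        int s = 0;
        for (; n > 0; n /= 10) s += n % 10;
        return s;
    }

    int main() {
        std::printf("%d\n", digitSum(9 * 9 * 9));  // 729 -> 18
    }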


This post simply doesn't belong here on HN at all. I've flagged this.

Also, I'm looking at the list of cases the ACLU is fighting: https://www.aclu.org/defending-our-rights/court-battles

Things I see:

- A case preventing widespread NSA wiretapping
- A case forcing authorities to obtain warrants for cellphone history
- An FOI request on the use of widespread wiretapping
- A case to force Nevada to provide legal representation for those who cannot afford it
- A challenge to the constitutionality of an executive order signed by the president
- A case against declaring people "enemy combatants" and claiming they have no rights

Can you think of a more positive organization? I suspect some people don't like some of the cases the ACLU brings, but that's the whole point of the ACLU. They are there to make sure everything is correctly challenged in court, and they have had cases where they supported the far right and the far left. If the volunteer lawyers of the ACLU bring something you think has no merit to court and they win, don't blame the ACLU. Blame the laws. The ACLU is the one organization ensuring laws are correctly tested and implemented.


Belgium was bristling with defenses in WW2, in the same way France, Poland, and the western front of Russia were. They had networks of expensive forts with numerous gun turrets on them.

eg. https://en.wikipedia.org/wiki/Battle_of_Fort_Eben-Emael

Wars tend to expose obsolete military doctrines. Big fortresses on land and sea (battleships) that can bombard targets many miles away are great if the enemy doesn't just fly in.


>we know that light can be bent by the warping of space-time due to gravity

That literally is light obeying gravity. Gravity interacts exclusively via the warping of space-time. Gravity and the Higgs field are completely, 100% unrelated.

https://profmattstrassler.com/2012/10/15/why-the-higgs-and-g...


Thank you for that link, it clears up a lot!


> Gravity and the Higgs field are completely 100% unrelated.

That is an incredible insight for me -- I never thought of it that way. So the thing that creates inertia (mass) is not the same as the thing that generates gravitational force -- they're just correlated in the types of matter that beings like us can interact with?


IANAP, but IIRC, while the Higgs field gives mass to electrons, quarks, the W bosons, and a few more particles, the vast majority of the mass of the atom (and thus of what we normally call matter) is due to the binding energy of quarks and gluons, and is not related to the Higgs field.


In that case the government led the charade, from the claims of 'nuc-u-lar weapons' (as Bush pronounced it) to the claims of buying yellowcake from Niger, and the media just went along.

In this case the media is doing it themselves from the start.


A brief explanation, from a layman who's wondered about this before, of why primes peak at repeated multiples.

Obvious first example: all primes above 2 are of the form 2x+1. An obvious repeating pattern of primes.

You can take this a small step further: all primes above 6 are of the form 6x+1 or 6x+5. Anything else is a multiple of 2 or 3. So above 6, only 1/3 of numbers are worthy of being considered prime. This is a slightly less obvious example.

A small step further: all primes above 30 are of the form 30x+1, 30x+7, 30x+11, 30x+13, 30x+17, 30x+19, 30x+23 or 30x+29. Anything else is a multiple of 2, 3 or 5. So above 30, only 8/30 numbers are worthy of being considered prime. See how we've created a new pattern from the product 2x3x5 to rule out a swath of prime candidates.

I could repeat this for each prime found. E.g. I could take the common multiple of 2, 3, 5, 7 (210) and create a similar pattern for all numbers above 210 that rules out the repeated multiples of 2, 3, 5 and 7 (leaving us just 58/210 numbers worthy of being considered prime).

This is why you see peaks of primes at various repeating multiples. For every new prime found, you can take the product of it and all previous primes, and from that rule out primality for various offsets from multiples of that number. So primes above certain numbers can only possibly exist in certain forms, which is why you see primes at repeated offsets from each other - the primes can only exist in those forms.
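
A sketch that generates the "forms" above for any wheel:

    // Sketch: print the residues mod 30 that can still contain primes
    // above 30, i.e. the offsets coprime to 2*3*5. These are exactly
    // the forms listed above: 1, 7, 11, 13, 17, 19, 23, 29.
    #include <cstdio>
    #include <numeric>  // std::gcd (C++17)

    int main() {
        const int wheel = 2 * 3 * 5;  // try 2*3*5*7 = 210 for the next one
        int count = 0;
        for (int r = 1; r < wheel; ++r) {
            if (std::gcd(r, wheel) == 1) {
                std::printf("%dx+%d ", wheel, r);
                ++count;
            }
        }
        std::printf("\n%d of %d residues can hold primes\n", count, wheel);
    }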


This pattern forms the basis for wheel factorization [1], a faster way to factor a number than naïve trial division.

[1] https://en.wikipedia.org/wiki/Wheel_factorization



Author of this thread here. That article states that the above sieve was created in 2003.

I actually wrote and described it back in 2002 - https://forums.overclockers.com.au/threads/show-off.71630/#p...

I didn't think much of it back when I did it, and I'm not a mathematician, just a hobbyist. I guess I should publish my prime hobbies more...


Have you thought about mailing it to the authors of the paper? They should somehow cite you, I guess.


That seems so obvious when you put it like that. (Although I guess all of mathematics is either "obvious" or "unsolved".)


I think most of what we've solved in math is not obvious.

Some accessible examples would be Fermat's Last Theorem or the four color problem.


Small correction: there are only 48 (not 58) numbers out of 210 that are not multiples of 2,3,5,7.


I haven't done all the math for this (I've deeply investigated the pattern for 2x+1) but it seems like this would be an obvious and intuitive result of primes. You are still generating primes from primes. Yes, you find more primes, but the computation is still dependent on primes. I'm still of the opinion that there is no complete pattern to the primes.

I'm assuming the researchers do not intend to confuse a crystal lattice structure with an actual mathematical lattice, because while the two may share similar influences in their models, one is math, the other is physics.

#keepmathpure


I've heard this sentiment before and I don't really understand it. There's no separating physics and math. Keeping math "pure" really means keeping it "purely abstract", so it resists any kind of practical application.


> There's no separating physics and math.

There is. Even though most physics research is extremely mathematical and abstract these days, it's still ostensibly grounded in empirical science. Math is not science, it just provides useful tools and insights for studying science. Unlike physics, the disciplinary imperative of math is not to provide us with truths about this world or any other world. Its imperative is to tell us what must follow as a consequence from a given set of assumptions and definitions. This is a very important philosophical dichotomy because it means that even the most lackadaisical, abstract problems in physics (such as moonshine in high energy physics) are still grounded in something "real." Math need not be grounded to anything real; it can be decoupled from what is real or even possible entirely.

> Keeping math "pure" really means keeping it "purely abstract", so it resists any kind of practical application.

I'm not one to be elitist with regards to pure versus applied mathematics, so I sympathize with your point here. That being said, purely abstract mathematics can be extremely useful even if it doesn't ultimately relate to the real world. Consider what G.H. Hardy wrote nearly a century ago in A Mathematician's Apology:

"...both Gauss and lesser mathematicians may be justified in rejoicing that there is [number theory] at any rate...whose very remoteness from ordinary human activities should keep it gentle and clean."

If only Hardy had lived long enough to see his pure and beautiful number theory sullied with the applications to error-correcting codes and cryptography.


While I certainly understand that math and physics are separate concepts, physics as we know it wouldn't be possible without math. I'm sure in your mind you can separate them, but if you took math away from physics, we wouldn't have modern physics.

Math is how physics is given practical application; if anything, that means science is more abstract than math is.


It's about direction of influence.

Physics should not influence math. Math has to be a consequence of its own axioms, or have its properties tested against its own constructions.

Otherwise math falls apart. Then it's useless for physics.

Practical application is wonderful for math. But the direction math is crafted and interpreted matters a lot.


I don't agree with this. And honestly, I really think that depends on what foundation you rely on to think with, work with, create with, test with, and check your own tests with. Physics does not have to use itself to understand itself. Math does.


> Physics does not have to use itself to understand itself.

Could you explain what you mean by that?


It's simple. Physics uses mathematics to construct equations that describe properties of physics.

The difference is physics has reality to test against - observations that can be measured. Math does not have this. Math's only metric against itself is itself.


In your example you started at 1/3 (33.33%) and moved to 58/210 (27.62%). What does this limit approach ad infinitum?



Sierra was a big fan of 'cruel' game design. Fail to do something properly at the start? The game will let you continue, but you can never win.

http://tvtropes.org/pmwiki/pmwiki.php/UnwinnableByDesign/Sie...

And yeah it sucked.

