For the ordinary natural numbers, 1 is the only number you can invert: 1/2, 1/3, 1/4, etc. are not integers, but 1/1 is. If we include all the integers (positive, negative and zero), then 1 and -1 are invertible. But things become much more complicated in other number rings. For example, in the number ring obtained by adjoining the square root of 7 to the integers (i.e. Z[sqrt(7)]), there are infinitely many invertible values: 3 sqrt(7) - 8 is invertible, its inverse is -3 sqrt(7) - 8, and every power of it is invertible too, so the units really are infinite in number. Anyway, the problem with units (invertible values) is that they divide every number in the number ring (after all, by definition they divide 1, and 1 divides everything). They therefore aren't very useful for unique factorisation (aside: unique factorisation does hold for elements of Z[sqrt(7)], though not for every conceivable number ring). Because of this, it is natural to exclude units from being primes.
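To see this without any floating point, you can do the arithmetic exactly; a minimal Python sketch (names made up for illustration), representing a + b*sqrt(7) as the integer pair (a, b):

```python
# Elements of Z[sqrt(7)] as pairs (a, b) meaning a + b*sqrt(7).
def mul(x, y):
    a, b = x
    c, d = y
    # (a + b*s)(c + d*s) = (ac + 7bd) + (ad + bc)*s, since s^2 = 7
    return (a * c + 7 * b * d, a * d + b * c)

u = (-8, 3)       # 3*sqrt(7) - 8
u_inv = (-8, -3)  # -3*sqrt(7) - 8
print(mul(u, u_inv))  # (1, 0), i.e. the element 1, so u is a unit

# Every power of u is also a unit: u^n * u_inv^n = 1 exactly.
p, q = u, u_inv
for _ in range(5):
    p, q = mul(p, u), mul(q, u_inv)
    assert mul(p, q) == (1, 0)
```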
Interesting with a side note about toying with Excel.
When I plug in:
1/(3 sqrt(7)-8)
and
-3 sqrt(7)-8
...into Excel, there's clearly some rounding error. They resolve to:
-15.9372539331939000
and
-15.9372539331938000
...respectively.
Time to plug these into Wolfram. I get:
-15.93725393319377177150484726091778127713077754924735054110500337760320646969085088328117865942363083184519373501549238...
and
-15.93725393319377177150484726091778127713077754924735054110500337760320646969085088328117865942363083184519373501549238...
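A quick check in plain Python reproduces both behaviours: doubles show the same last-bits rounding as Excel, while the decimal module at higher precision matches the Wolfram digits:

```python
import math
from decimal import Decimal, getcontext

# In double precision the two expressions agree only approximately,
# which is the rounding Excel displays at a coarser precision.
x = 1 / (3 * math.sqrt(7) - 8)
y = -3 * math.sqrt(7) - 8
print(x, y)  # both about -15.9372539331..., possibly differing in the last bits

# With 60 digits of working precision the two agree to many more places,
# matching the Wolfram output above.
getcontext().prec = 60
r = Decimal(7).sqrt()
print(1 / (3 * r - 8))
print(-3 * r - 8)
```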
If you want to do exact computations, SAGE is handy, Free, and built on Python:
    sage: k.<a> = QuadraticField(7); k
    Number Field in a with defining polynomial x^2 - 7
    sage: x = 3*a - 8
    sage: x.is_integral()
    True
    sage: 1/x
    -3*a - 8
    sage: x * (1/x)
    1
    sage: (1/x).is_integral()
    True
(Note: the is_integral() calls are there because we defined k as QQ(sqrt(7)), not ZZ[sqrt(7)], so we want to check that our elements really are algebraic integers, i.e., lie in ZZ[sqrt(7)].)
It would be nice if you could prepend them with four spaces, to place them in code blocks (they have overflow:scroll), or break them into multiple lines (or both).
It's kind of murky, because in commutative algebra that is actually not the definition of a prime; it just so happens that the prime numbers are precisely the natural numbers with this property. It's fine to define prime numbers that way, but in more general contexts a prime element is an element p which cannot divide a product a.b without dividing one (or both) of a and b. It just so happens that the non-invertible natural numbers with this property also cannot be written as a product p = c.d with c != 1 and d != 1. In other number rings these are not necessarily the same thing.
Is something missing here? As I understand it, that general definition works like this:
6 divides 36 = 4 x 9, but does not divide either 4 or 9. Therefore, 6 is not prime.
But I'm pretty sure 1 cannot divide the product of two integers m and n while failing to divide m and n individually. So while that general definition does exclude composite numbers, it doesn't seem to exclude 1?
The correct "general" definition also requires the prime element to be non-invertible and non-zero, thus 1 is excluded.
It should be noted that this definition excludes "composite numbers" only when the ring is an integral domain. In a general commutative ring this isn't necessarily true.
For this and the "general" definition to coincide, the ring in question has to be a unique factorization domain, which the integers are an example of. In any larger class of rings (integral domains, or general commutative rings), there are rings for which this doesn't hold.
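As an illustration (not a proof), the divides-a-product definition can be brute-forced over a small window of ordinary integers; note that 1 passes the product property and is excluded only by the extra non-unit condition:

```python
# Brute-force check of the "p | a*b implies p | a or p | b" property
# over a small window of positive integers (illustration, not a proof).
def divides_product_property(p, limit=50):
    return all(a % p == 0 or b % p == 0
               for a in range(1, limit)
               for b in range(1, limit)
               if (a * b) % p == 0)

# 1 satisfies the product property but is a unit, so the general
# definition has to exclude it explicitly:
assert divides_product_property(1)

primes = [p for p in range(1, 20)
          if p != 1 and divides_product_property(p)]
print(primes)  # [2, 3, 5, 7, 11, 13, 17, 19]
```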
The rational roots of integers will cause interesting interactions, because when you multiply them you get back integers. In other words, the fact that sqrt(7) * sqrt(7) is an integer means that sqrt(7) interacts with the integers when you adjoin it. This is not the case for a transcendental number like pi.
More formally: this type of interaction holds for an irrational number if (and only if?) it is the root of a polynomial with integer coefficients. If you adjoin the root of a polynomial P(x) to the integers, the properties of the resulting set of numbers (e.g. factorization being unique) can be related back to properties of P(x). The nature of the connection has to do with the relationship between a set of numbers and the set of polynomials with coefficients from that set, and the equivalence relations on those polynomials that are induced by P(x).
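A toy Python sketch of that reduction idea (names made up for illustration): adjoining a root of x^2 - 7 means every polynomial expression in sqrt(7) collapses to the form a + b*sqrt(7) via the relation sqrt(7)^2 = 7:

```python
# Reduce a polynomial expression in sqrt(7) using the relation
# sqrt(7)^2 = 7 induced by the defining polynomial x^2 - 7.
def reduce_mod(coeffs):
    # coeffs[i] is the integer coefficient of sqrt(7)**i
    out = [0, 0]  # result as [constant part, coefficient of sqrt(7)]
    for i, c in enumerate(coeffs):
        # sqrt(7)**i = 7**(i // 2) * sqrt(7)**(i % 2)
        out[i % 2] += c * 7 ** (i // 2)
    return out

print(reduce_mod([0, 0, 1]))     # sqrt(7)^2 -> [7, 0], an ordinary integer
print(reduce_mod([0, 1, 0, 1]))  # sqrt(7) + sqrt(7)^3 -> [0, 8]
```

No such finite relation exists for pi, which is why adjoining pi never "interacts" with the integers this way.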
I see, that makes perfect sense. In this particular case, you end up multiplying by the conjugate of the denominator to get rid of the root, which gives a difference of squares that are both integers, so you easily end up with an integer in the denominator of the fraction. That makes it trivially provable that you'll always end up with a rational number in the end (maybe not an integer).
It is and it isn't. For example, you can adjoin sqrt(-1). This time there are finitely many units (just 1, -1, i and -i), but still unique factorisation. Instead adjoin sqrt(-17) and you get finitely many units, but you also do not get unique factorisation.
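A small Python sketch (elements of Z[sqrt(-17)] as integer pairs, with the usual norm) makes the failure of unique factorisation concrete: 18 factors both as 2 * 3 * 3 and as (1 + sqrt(-17))(1 - sqrt(-17)), and a norm argument shows none of those factors can be split further:

```python
# Elements of Z[sqrt(-17)] as pairs (a, b) meaning a + b*sqrt(-17).
def mul(x, y):
    a, b = x
    c, d = y
    return (a * c - 17 * b * d, a * d + b * c)

def norm(x):
    a, b = x
    return a * a + 17 * b * b  # multiplicative: norm(xy) = norm(x) * norm(y)

# Two genuinely different factorisations of 18:
assert mul((2, 0), mul((3, 0), (3, 0))) == (18, 0)  # 2 * 3 * 3
assert mul((1, 1), (1, -1)) == (18, 0)              # (1+sqrt(-17))(1-sqrt(-17))

# Each factor is irreducible: a proper divisor of 2, 3, or 1 +/- sqrt(-17)
# would need norm 2, 3, or 6, and no element of the ring has such a norm
# (b != 0 forces norm >= 17; b = 0 forces a perfect square).
small_norms = {norm((a, b)) for a in range(-5, 6) for b in range(-2, 3)}
assert not {2, 3, 6} & small_norms
```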
Wow, so many comments from people that clearly didn't bother reading the article (all the way through). I suppose it's understandable; I too had to fight the temptation to hit the back button and angrily type out "it's just a definition!". Luckily I decided to actually read the whole thing through, noticed that the author is quite aware of this, and was treated to a really interesting history of math.
Agreed. It is an interesting article to read. The survey cited is also interesting in its own right [1]. We often forget that the ideas we find trivial and intuitive are only so because it took the work of many people, over centuries, to hone ideas which at the time would have been new and ambiguous. This is especially true of mathematics, because much of the field focuses on finding the right abstractions for complex ideas, which when done correctly seem "trivial, natural, and straightforward." Also see the history of calculus [2].
    function f(n, k) {
        var t = 0;
        for (var j = 2; j <= n; j++) t += 1/k - f(n/j, k+1);
        return t;
    }

    function p(n) {
        return f(n, 1) - f(n-1, 1);
    }
If you call p(n) when n is prime, it will return 1.
If you call p(n) when n is a prime power (so, say, 4 or 9 or 16), it will return 1/power (so p(4) is .5, p(8) is .3333..., etc).
If you call p(n) with a number with multiple prime bases (so 6 or 14 or 30 or...), it will return 0.
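A direct Python port of the function above makes the claimed behaviour easy to check (floating point, so values are only approximate):

```python
# Python port of the JavaScript recursion above.
def f(n, k):
    t = 0.0
    j = 2
    while j <= n:
        t += 1.0 / k - f(n / j, k + 1)
        j += 1
    return t

def p(n):
    return f(n, 1) - f(n - 1, 1)

print(p(5))   # prime: ~1
print(p(4))   # 2^2: ~1/2
print(p(8))   # 2^3: ~1/3
print(p(6))   # 2*3: ~0
print(p(1))   # ~0: 1 does not behave like a prime here
```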
Another way to compute this exact same function (given as (8) on that link) uses the famous Riemann Zeta function zeroes, although that is much harder to follow.
Now, the behavior of the Riemann Prime Counting function doesn't PROVE that 1 isn't a prime, which, as noted, is a question about definition. But what it does do is show that, in an extremely important context, a context that seems to be, mathematically, solely about identifying primes, 1 isn't behaving like the primes at all.
At some level you really shouldn't care why 1 isn't prime. It's not-prime because we defined it that way. 1 has some mathematical properties in common with the numbers we happen to call "prime numbers" and some mathematical properties which are different; in another world we could have called it prime and replaced "the prime numbers" with "the prime numbers greater than 1" in assorted theorems and EVERYTHING ABOUT MATH WOULD BE THE SAME.
Well, yes and no. If you're just talking about integer primes then sure. If you consider Gaussian integers though (a + b*i where a and b are integers), then you would have to say "the Gaussian prime numbers that aren't 1, -1, i, or -i".
The definition of primes that excludes units makes sense in more places than one that includes units (especially once you start talking about prime ideals instead of prime numbers)
Prime is a word not a fact of the universe, just like Planet is a word and not a fact of the universe. Those words, unlike most words, have definitions put in place by authorities. The authorities might change the definitions over time as a new definition's usefulness exceeds that of an old definition.
Practically:
It's more useful to think of primes and planets as not including their recent ex-members than it is to include them.
The problem of 1 not being prime could be easily solved by having a separate term, say "noncomposite" or whatever you like, that means the set of the primes and one.
It is not a prime because every integer greater than 1 must be expressible as a unique product of prime numbers, or is a prime itself. If 1 were prime, that would not hold true.
TFA addresses this and states that's not the actual historical reason. If it was, you could keep 1 a prime and rephrase the theorem to "a unique product of primes greater than 1".
I don't follow what you mean by "the actual historical reason". The article clearly indicates that 1 went through a long period of not being commonly considered a prime (for bad reasons), then a brief period of being commonly considered a prime (for bad reasons), and is now once more not considered a prime (for good reasons). It calls out the reason why we switched to the current view as being, at the very least, extremely closely related to the reason you're claiming is not "the historical reason".
Your parent comment is correct (or so close that I think correcting them is a bigger mistake); that is the reason 1 is not prime. We don't consider 1 "not prime" as an accident of history, we have the knowledge now to make an informed decision. The "historical reason" has fallen by the wayside.
This is related, by the way, to the question of 0! (zero factorial). By definition, it's 1. The most obvious reason for that is that it fills a bunch of different holes in various formulae that use factorials (Taylor series, virtually any combinatoric formula...). But that's an extremely instrumentalist approach. Is there any reason why we might predict that 0! = 1 without knowing in advance from combinatorics that we'd really like it to be the case?
Sure. What happens if we multiply together the first zero positive integers? Obviously, we get the empty product: 1. Hence the definitions I learned, where a prime number has exactly one factor (not two), and 1 has zero, not enough to be prime.
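Python's standard library agrees with this empty-product view:

```python
import math

print(math.prod([]))      # empty product: 1
print(math.factorial(0))  # 0! = 1, consistent with the empty product
print(math.prod([5]))     # a prime is the product of itself alone: 5
```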
The OP pretty much dismisses the article and cites the one thing that is well-known, less interesting, and not the focus of the article. The article is way more interesting and lists a variety or reasons (the one the OP quotes, historical reasons, wrong reasons, and even one final twist).
The OP's comment looks like a case of "did not RTFA". Color me unimpressed.
>It is not a prime because every integer greater than 1 must be expressible as a unique product of prime numbers.
That property is a consequence of the structure of the natural numbers together with a definition of primality that excludes 1. It is not true in general under a definition of primality that includes 1.
If you think of every integer as being a product of some other integers, e.g. the product of [a,b,c], then you could say the algorithm for that is: take the first number of the list and multiply it with the product of the rest of the numbers; when you get to an empty list, just apply the empty product, which is 1 for obvious reasons (1*a = a, since 1 is the multiplicative identity). So then [p] = p*1. Then by this definition [] has no factors; it's the empty product. Of course you don't have to think of it like this, but it might be useful to do so.
This has been covered by anaphor, but to respond to the question directly:
No, a prime number, for example 5, is not the product of 1 and itself (at least, no more than it's the product of 1/3 and 15). It is the product of itself only, so it can be represented as a set of factors like {5}. 1 is the product of nothing, and would be represented as the set of factors {}.
Multiplication, obviously, just combines the factors of the multiplicands. 6 (= {2,3}) times 2 (= {2}) equals {2,2,3}, 12. Hopefully that makes it clear why 1 should be {}.
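A small Python sketch of this factor-multiset picture (sorted lists standing in for multisets):

```python
# Represent each positive integer by its multiset (here, a sorted list)
# of prime factors; 1 gets the empty list, i.e. the empty product.
def factors(n):
    out, d = [], 2
    while d * d <= n:
        while n % d == 0:
            out.append(d)
            n //= d
        d += 1
    if n > 1:
        out.append(n)
    return out

print(factors(1))   # [] -- the empty product
print(factors(5))   # [5] -- a prime is the product of itself only
print(factors(12))  # [2, 2, 3]

# Multiplying numbers just concatenates their factor lists:
print(sorted(factors(6) + factors(2)) == factors(12))  # True
```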
What if we permitted 1 to be prime? In that case, 84 would also have the "prime" factorisation 1 x 1 x 1 x 2 x 2 x 3 x 7. That is, 84 could still be factorised, but it would no longer have a unique prime factorisation.
I did read the article. I was just pointing out that the poster's original statement seemed to exclude prime integers -- well, unless the empty product is a product of primes.
The definition is made for convenience's sake, based on what it implies. One being prime would imply that the other primes are evenly divisible by another prime (namely 1), yet for some reason that wouldn't stop them from being prime. So 1's primeness would have to be special relative to the primality of the other numbers.
Starting the primes at 2 makes all the definitions and implications simple, except for the one caveat that primes start at 2.
Either way you'll have to make a concession, we choose the one that only has to make it once.
The other way around, you end up with things like: the square of a natural number is not prime (except 1); the product of two primes is not prime (unless one of them is 1); and so on...