
This makes intuitive sense to me, but isn't the issue that relativity states that time is intrinsically coupled with space? One's experience of time relative to another's is directly related to the curvature of the space they're in. If it's more curved, time also "curves" and slows down relative to another observer in flatter space.

How does this notion of time become decoupled from space?

One way I imagine it is that "time" as a concept encapsulates at least two properties:

1) The degree of freedom (i.e., dimension) through which things can change; and
2) The unidirectional flow of causal events

Relativity seems more concerned with defining time in terms of causal events (e.g., event horizons) than its dimensionality. If we define "fundamental time" as a dimension that allows for change, can it then be decoupled from space?


The core equation of special relativity that couples space with time is (assuming the speed of light is 1):

    ds^2 = dx^2 + dy^2 + dz^2 - dt^2
Different observers disagree on what each individual term on the right-hand side of the equation is, but every observer agrees on what ds^2 is.

Thinking in terms of 3-dimensional Euclidean space, this makes sense. If you fix your 3-dimensional coordinate system and pick 2 points in space, you can have:

    ds^2 = dx^2 + dy^2 + dz^2
Another observer could pick a different orientation for their coordinate system and arrive at different values for dx, dy, and dz, but they would still have the same ds. This is just the Pythagorean theorem: the distance between two points is the same regardless of how you define your axes. This also means that your 3 spatial dimensions are inherently coupled, because there was no particular reason to pick your axes the way you did.
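
A quick numerical sanity check of that claim in Python (the separation and rotation angle below are arbitrary):

    import math

    # Two points separated by (dx, dy, dz) in one coordinate system
    dx, dy, dz = 3.0, 4.0, 0.0

    # Rotate the axes by an arbitrary angle in the x-y plane
    theta = 0.7
    dx2 = dx * math.cos(theta) - dy * math.sin(theta)
    dy2 = dx * math.sin(theta) + dy * math.cos(theta)

    # The individual components change, but ds^2 = dx^2 + dy^2 + dz^2 does not
    print(dx**2 + dy**2 + dz**2)    # 25.0
    print(dx2**2 + dy2**2 + dz**2)  # 25.0 (up to floating point)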

Similarly, in the 4-dimensional spacetime defined by the metric:

    ds^2 = dx^2 + dy^2 + dz^2 - dt^2
There is no particular reason to pick the time axis that you happened to pick. It is coupled with the other 3 dimensions in exactly the same way that the 3 dimensions are coupled in Euclidean space.

The only complication here is that "rotating" your axes into the time direction under the Lorentzian metric requires a Lorentz transformation (a boost), whereas rotating your axes under the Euclidean metric is an ordinary rotation.
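
The same sanity check for the Lorentzian case: a boost with velocity v (in units where c = 1) mixes dt and dx the way a rotation mixes dx and dy, yet ds^2 comes out the same. A minimal sketch with arbitrary numbers:

    import math

    dt, dx = 5.0, 3.0  # a timelike separation: ds^2 = 9 - 25 = -16

    # Lorentz boost along x with velocity v (c = 1)
    v = 0.6
    gamma = 1 / math.sqrt(1 - v**2)
    dt2 = gamma * (dt - v * dx)
    dx2 = gamma * (dx - v * dt)

    # dt and dx individually change, but the interval is invariant
    print(dx**2 - dt**2)    # -16.0
    print(dx2**2 - dt2**2)  # -16.0 (up to floating point)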

The coupling described here involves no notion of causality. Nothing in the metric prevents a path from traveling in both directions along the time axis.


Well, even in this metric, there is an obvious separation between the 3 spatial dimensions and time. For example, a negative ds^2 indicates a fundamentally different kind of distance than a positive ds^2. In fact, events separated by positive distances would be considered entirely independent, while those separated by negative distances can have influenced each other; events for which ds^2 = 0 are lightlike separated, meaning they can only be connected by something traveling at exactly the speed of light.

Further, if two events are separated by a negative ds^2, then all observers will agree on the order in which they happened, though they will not agree on the length of time that passed between them, or on their relative positions.

Note that I'm using your version of the equation for the definition of ds^2 > 0 or < 0, though in general I've seen it expressed the other way around: ds^2 = dt^2 - (dx^2 + dy^2 + dz^2).
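
For concreteness, here's a small classifier under the sign convention used above (with the other convention, the inequalities just swap):

    def classify_interval(dt, dx, dy=0.0, dz=0.0):
        """Classify a separation under ds^2 = dx^2 + dy^2 + dz^2 - dt^2 (c = 1)."""
        ds2 = dx**2 + dy**2 + dz**2 - dt**2
        if ds2 < 0:
            return "timelike: can be causally connected; ordering is frame-independent"
        if ds2 > 0:
            return "spacelike: causally independent"
        return "lightlike: connected only by something moving at the speed of light"

    print(classify_interval(dt=2.0, dx=1.0))  # timelike
    print(classify_interval(dt=1.0, dx=2.0))  # spacelike
    print(classify_interval(dt=1.0, dx=1.0))  # lightlike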


There is a clear asymmetry between the temporal dimension and the spatial dimensions, but it is not clear to me that this implies a separation.

We call distances with ds^2 < 0 timelike, and those with ds^2 > 0 spacelike. We say that events with a spacelike separation cannot influence each other, but there is nothing in relativity that requires that (unless you introduce causality as an additional assumption, which we generally do).

This gets messier in general relativity when you allow for large masses (read: black holes).

In the case of a non-spinning black hole, spacetime is described by the Schwarzschild metric. Expressing this metric in spherical coordinates, you find that when you pass the event horizon, the signs of the dt^2 and dr^2 terms flip, where r is the distance from the singularity. This means a "timelike" separation now means that events are closer to each other in the time dimension, and events with a "spacelike" separation are farther from each other in time. That is to say, "time" behaves the way we think of "space" behaving, and "space" behaves the way we think of "time" behaving.
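
You can see that sign flip directly in the t-t and r-r coefficients of the Schwarzschild metric, g_tt = -(1 - r_s/r) and g_rr = 1/(1 - r_s/r), where r_s is the Schwarzschild radius (in units where G = c = 1). A minimal sketch:

    def schwarzschild_signs(r, r_s=1.0):
        """Signs of the t-t and r-r metric coefficients at radius r (G = c = 1)."""
        f = 1 - r_s / r
        g_tt = -f      # coefficient of dt^2
        g_rr = 1 / f   # coefficient of dr^2
        return g_tt, g_rr

    print(schwarzschild_signs(r=2.0))  # outside the horizon: g_tt < 0, g_rr > 0
    print(schwarzschild_signs(r=0.5))  # inside the horizon:  g_tt > 0, g_rr < 0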


Time measurement is coupled to space, not time itself.

Actually, time does not exist. It's simply the rate of change of matter.


What is that "something" that is happening all at once? My naive understanding of cosmological theories is that they seem to end in probabilities (e.g., eternal inflation) that emerge from a kind of (oxymoronic) "chaotic static", but what does that mean for this to "happen all at once"? That statement seems to imply there's no degree of freedom through which something can occur (at least causally).

I suppose the question comes down to: what is time? I imagine "time" to be a degree of freedom through which something can change. I understand some see time as related to entropy, and that it's an emergent property of this thermodynamic property, but this seems too limiting. It would seem entropy is more so an explanation for the arrow of time, not time itself.

If time is defined as the dimension through which something can change, then shouldn't time be fundamental to any cosmological theory that extends beyond the Big Bang?


> What is that "something" that is happening all at once?

Not "something". Everything. Like "What is possible to happen? Everything is possible". If you can't divide that possibility by time period, everything possible will happen with probability of 1. How improbable is that universe will happen during 1 trillion years? Very improbable. But if you can't tell in how many years universe will happen, it just will exist.

> but what does that mean for this to "happen all at once"

If you don't have time with which to distinguish two things happening one after another, they will both happen at the same exact "time".

> If time is defined as the dimension through which something can change, then shouldn't time be fundamental to any cosmological theory that extends beyond the Big Bang?

The problem is, time depends on the local state of space. More energy/matter in a chunk of space means time runs slower relative to other chunks of space. So when you try to go back into the past, with the mass in a chunk of space increasing toward infinity, that notion of time breaks down.


A series of causal events requires network recurrence for time to be calculable. If it's not calculable, it can't be said to exist in any mathematical sense. So a series of non-recurrent causal events at the fundamental level proceeds causally without the existence of time.

The same is true of "space" for the same reason. A causal network without recurrence (a tree) has no way to compare objects, since there are no horizontal relations and it cannot "look" at backwards relations.

A non-recurrent causal tree has multiple instances of "here" and "now." But without anything to compare a given here/now to, the concepts are internally meaningless.


> Infinity, is still unfathomable but at least it is easier to fit Maths and Physics around.

Arguably Maths requires this same discontinuity due to Gödel's Incompleteness Theorem. Consistent axiomatic systems require a Gödel sentence that is unprovable in the system; it requires a logical tautology or else your system will contain an infinite regress.

I think of a logical tautology as a kind of "something from nothing", but I suppose that is debatable. The implication of this property of axiomatic systems for reality is also debatable, but wouldn't any system we come up with to describe reality need to abide by these same rules?


> Consistent axiomatic systems require a Gödel sentence that is unprovable in the system; it requires a logical tautology or else your system will contain an infinite regress.

I don't see the connection between these two statements. Yes, in a consistent system with a finite description, a statement in that system which can be interpreted as "the system [description of the system] cannot prove [this statement]" cannot be proven (also, where "can be interpreted as" is taken to mean the sensible thing that it should mean) (though sufficiently weak systems can have languages incapable of expressing such a statement).

But the second part of that sentence, after the semicolon, appears to be talking about something else? It seems like it is describing the Münchhausen trilemma, except applied to mathematical proofs, and giving an argument that a system needs axioms (and/or rules of inference). While a system of proof does need axioms and rules of inference in order to conclude anything, this is not because of the incompleteness theorem.

The two do not seem particularly connected to me. So, I don't see why you connected the two statements with a semicolon.

Perhaps I misunderstood what you meant by the second statement in the sentence?

Also, when you say "logical tautology", are you talking about purely logical tautologies, or are you talking about axioms? The way you are using it makes it seem like you are talking about axioms, which I wouldn't call logical tautologies. But maybe you do mean what I would mean if I was to refer to "logical tautologies"? In that case, uh, I don't think logical tautologies are all that much of a foundation?


Gödel's theorem relates to systems of mathematical proof (epistemology), rather than mathematical models of reality (ontology). The problem comes not with fitting maths around an infinite universe (which arguably requires fewer assumptions than for a finite one), but with building maths in the first place.


We need infinity in mathematics for proofs, limits, etc. Even magnitudes of infinity have useful properties in game theory and signal theory. There is something “real” about infinity that “works” in our Universe. The axioms of infinity in math seem to “work”. Could there be a better set of axioms? Sure, but no one has come up with one.


The United States has a substantial advantage in that the dollar is considered the world's reserve currency. This is in large part due to the "petrodollar", as the vast majority of oil is sold only in dollars. This puts a large demand on US Treasuries from foreign countries and banks so that they can meet this dollar demand.

The dollar's dominance is slowly waning, as Iran now sells oil to China using the Yuan. The IMF also has a potential replacement reserve "currency" called Special Drawing Rights (SDR), though it's not technically a currency but more an index based on a basket of currencies. That basket includes US Dollars, Japanese Yen, the Euro, Pound Sterling, and most recently the Chinese Yuan.
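
Mechanically, the SDR's valuation is just a weighted sum: the IMF fixes an amount of each basket currency, and the SDR's dollar value on a given day is those amounts converted at current exchange rates. A sketch in Python; the currency amounts and exchange rates below are made up for illustration, not the IMF's actual figures:

    # Hypothetical fixed currency amounts per 1 SDR (illustrative only)
    basket = {"USD": 0.58, "EUR": 0.39, "CNY": 1.02, "JPY": 11.9, "GBP": 0.086}

    # Hypothetical exchange rates: US dollars per unit of each currency
    usd_rates = {"USD": 1.0, "EUR": 1.10, "CNY": 0.14, "JPY": 0.0091, "GBP": 1.30}

    sdr_in_usd = sum(amount * usd_rates[ccy] for ccy, amount in basket.items())
    print(round(sdr_in_usd, 4))  # dollar value of 1 SDR under these rates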


This St. Louis Fed article says the US made more money from seigniorage when inflation levels were moderate than when they were low. So this is perhaps a thin pretense for a never-ending debt bubble.

https://files.stlouisfed.org/files/htdocs/publications/revie...


SDR sounds like Facebook’s Libra without the blockchain nonsense.


How do you draw that conclusion?


Basket of currencies without a blockchain or digital ledger component.

Although the Libra Wikipedia page indicates that idea has been deprecated:

> As of January of 2020, Libra is said to have dropped the idea of a mixed currency basket in favor of individual stablecoins pegged to individual currencies.


What differentiates this scenario from individual neurons firing together to make up the human mind? Can you keep taking that line of thinking down to the subatomic properties of the neurons themselves? I'm guessing this is the argument for panpsychism: the emergent property of human consciousness could be seen as the culmination of individual parts that themselves have some form of primitive consciousness.


Isn't that argument roughly as useless as attributing a watch's ability to show time to each individual mechanical component? It's technically true, but not very useful for elucidating what actually makes the watch a watch. For example, it sounds kind of silly to say that a single watch gear has some form of primitive time-keeping ability, or some form of primitive ability to tell me whether I'm running late. The universe is rife with examples of emergent complexity in which the whole is greater than the sum of its parts.


Well, it is still useful, just not to you directly. The distinction being that the right intermediary could make use of it.

If you had a physics model, it could describe the watch perfectly, but it would likely involve a very large set of matrices and linear algebra constraints. You technically could work through every equation in that set, but examining the watch yourself would likely be quicker, if not becoming a sufficiently good watchmaker.

However, if you assembled a computer model to handle every component's interaction individually, and a physics engine to do the number crunching, you could get a working mathematical model of the clock.


Reading through the article's comments, there's one from "LegalBeagle" criticizing the man's lawyer for using the wrong argument in the lawsuit, saying he instead should've filed the lawsuit as a violation of the 5th Amendment.

Luckily he must've gotten a better lawyer, because an appeal was allowed to continue on the grounds that the DEA violated the Takings Clause of the 5th Amendment:

> Because plaintiffs have stated facts sufficient to demonstrate that the government physically deprived them of property for the duration of the controlled drug delivery operation, we hold that plaintiffs have stated a claim for a taking compensable under the Fifth Amendment. We therefore deny defendant’s motion to dismiss pursuant to Rules 12(b)(1) and 12(b)(6).

https://www.courtlistener.com/opinion/4467728/patty-v-united...


To be fair, they tried to go after AT&T to kill the Time Warner deal (because they own CNN), but the judge basically threw out the case.


Maybe you shouldn't have waited until the absolute last minute to do this, or it wouldn't have been replaced.


I am not sure what you are implying here. I have zero interest in maintaining the project. This was the last blow to a dying project.

The king is dead, long live the king.


Meh. Ultimately this is OSS. The maintainer was negligent. If you're not a good OSS maintainer, you don't deserve to maintain control. It's not reasonable to be indefinitely patient when the original author shows no initiative to maintain their project.


They didn't even try to contact the author?!


dstat was basically presumed dead due to lack of upstream activity. I recall the internal discussion was something like

A: "Whoa, there is no dstat in RHEL8? Customers are gonna complain."

B: "Yes, it's because it's Python 2 only and upstream is dead. Tell customers to use PCP"

A: "Srsly? Nobody thinks of the scripts? Besides, dstat output is so nice."

C: "Well, there is some toy example to print pcp data in dstat format."

A: "Do we really want to tell customers 'there is a toy example'?"

C: "Give me a day or two to pimp it up."

And then C got a bit carried away. :-)


Why would you try to contact someone who's abandoned the project?


Maybe because it is the nice thing to do?

And it's not like I have disappeared from the face of the earth. I am quite active on GitHub.


It was probably the expansion of financial instruments like Collateralized Debt Obligations (CDOs) [0], along with shadow-banking mortgage companies (like Countrywide [1]) that simply sell mortgages to banks, which in turn securitize the mortgages into CDOs.

[0] https://en.wikipedia.org/wiki/Collateralized_debt_obligation [1] https://en.wikipedia.org/wiki/Bank_of_America_Home_Loans


This is the correct answer, I think. The people who made the mortgages were no longer the people who owned the mortgages. The banks didn't even hold on to them; they sold them in tranches to investors. So no one in the actual industry had any incentive to do anything other than close mortgages.
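
To make "tranches" concrete: losses on the underlying mortgage pool are applied bottom-up, so the equity tranche absorbs defaults first and the senior tranche is only touched once everything below it is wiped out. A toy sketch of that waterfall (the tranche names and sizes are made up):

    def apply_losses(tranches, loss):
        """Apply pool losses bottom-up. tranches: list of (name, size), most senior first."""
        remaining = {}
        for name, size in reversed(tranches):  # start with the most junior tranche
            absorbed = min(size, loss)
            loss -= absorbed
            remaining[name] = size - absorbed
        return remaining

    # Hypothetical $100M pool: 80% senior, 15% mezzanine, 5% equity
    pool = [("senior", 80.0), ("mezzanine", 15.0), ("equity", 5.0)]
    print(apply_losses(pool, loss=12.0))
    # {'equity': 0.0, 'mezzanine': 8.0, 'senior': 80.0}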

People could get no down-payment mortgages, could get mortgages with bad credit, could get mortgages on houses they couldn't afford, because all the mortgage originators just fudged the paperwork. No one wanted to find a reason to not provide the mortgage.

Everyone being able to buy a house heated the market up, and as people got used to the market going up and up, it was seen as a good investment, and more people bought more expensive houses they couldn't afford, and mortgage originators did more shady things to make it happen.

A lot of these mortgages were adjustable rates that started at a really low rate, and in 3 years could shoot up to a much higher rate. An optimistic consumer wouldn't worry much about that. But when the time came, some people couldn't pay the crazy increase in their mortgage. The foreclosures coming onto the market depressed the market.

So the bubble finally popped, and someone owned a house that they had bought for $750,000 with no money down, with an adjustable mortgage for the whole $750,000 that started at 4% and in three years shot up to 8%, and when that happened the house could only be sold for $450,000. So they just stopped paying the mortgage and walked away. The foreclosures further depressed prices, creating a very nasty cycle.
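
To put rough numbers on that payment shock, here's the standard fixed-payment amortization formula in Python. This is a simplified comparison over a full 30-year term at each rate (a real ARM reset would re-amortize the remaining balance over the remaining term, but the ballpark is similar):

    def monthly_payment(principal, annual_rate, years=30):
        """Standard amortization formula: P * r * (1+r)^n / ((1+r)^n - 1)."""
        r = annual_rate / 12  # monthly interest rate
        n = years * 12        # total number of payments
        return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

    print(round(monthly_payment(750_000, 0.04)))  # ~3581 per month at 4%
    print(round(monthly_payment(750_000, 0.08)))  # ~5503 per month at 8%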

I worked in land records at the time, and the standards for the mortgage originators were just horrible. Not bothering to record mortgages in the correct town, not bothering to do a lot of things.

There are a few really interesting movies that describe what happened, real-life thrillers IMO. "Margin Call" is awesome, "The Big Short" explains what happened really well and is pretty funny at times, and "Too Big to Fail" is from the regulators' point of view. It's kind of awkward but still really interesting.


I got my start as a DBA working for a company that was founded to serve as oversight for this process. We'd assign a risk label to a loan (from the originator) and then weight our aggregate risk against the documented risk of the financial instrument (a security from the "bank"). Finally, there was a group within the company to follow up with the loan servicers to ensure that they were properly servicing (erm, collecting) loans within that instrument.

It was insane how many new dimensions we'd have to create (on what seemed like a daily basis) for the variety of security and documentation types that the banks would generate.

