Crank science is easy to separate from non-mainstream approaches. Crank science is an "everybody is selling, nobody is buying" business. There are no physicists inside *or* outside the mainstream who buy Consa's crank theories.
(I explained why the narrative is not valid in another comment).
In physics, one can get grants almost only by staying mainstream ...
Consa brings concrete arguments regarding the g-factor. I still haven't seen any concrete explanation in response, only claims that it is fringe science because it criticizes the mainstream ... yet he quotes mainstream papers.
Dirac: "I must say that I am very dissatisfied with the situation because this so-called 'good theory' does involve neglecting infinities which appear in its equations, ignoring them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves disregarding a quantity when it is small – not neglecting it just because it is infinitely great and you do not want it!"
and Feynman:
"The shell game that we play is technically called ’renormalization’. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It’s surprising that the theory still hasn’t been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate.”
This is an outdated, pre-1970s view of renormalization. Thanks to the work of Wilson (1982 Nobel Prize) and others on the renormalization group, we have a much better understanding.
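For readers who haven't seen a renormalization-group calculation, here is a toy, hedged illustration from statistical mechanics rather than QED: decimating every other spin in the 1D Ising chain gives an exact recursion for the coupling, and iterating it shows what an RG flow looks like (the starting coupling value is arbitrary).

    # Toy RG flow: decimate every other spin in the 1D Ising chain.
    # Exact recursion for the dimensionless coupling K = J/kT:
    #   tanh(K') = tanh(K)**2
    import math

    K = 1.5                                   # arbitrary starting coupling
    for step in range(8):
        K = math.atanh(math.tanh(K) ** 2)     # one decimation step
        print(f"step {step + 1}: K = {K:.6f}")
    # K flows to 0: read directly off the flow, the 1D Ising model has
    # no finite-temperature phase transition.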
Thanks, I will read it, but generally, beside e.g. the gravity problem, discrepancies keep appearing all over the Standard Model as measurement accuracy increases, so maybe it is worth revisiting QED? Are you saying the g-factor discrepancies are not a problem?
The discrepancy in the article you linked is completely unrelated to how well we conceptually understand renormalization (which is broadly applicable to many quantum field theories, not just the standard model). It could be the case that the standard model is wrong, but my claim would still stand.
This patent covers a rANS variant which is used, for example, in https://en.wikipedia.org/wiki/JPEG_XL - if granted, only Microsoft will be able to make hardware encoders/decoders for it.
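For context, a minimal sketch of the core rANS encode/decode step in Python, with a made-up toy frequency table. Real codecs (including the one in JPEG XL) add renormalization and byte-stream I/O to keep the state in a machine word; this sketch skips that by relying on Python's big integers, and it says nothing about what the patent claims actually cover.

    # Minimal rANS sketch: symbols are pushed onto an integer state and
    # popped off in reverse order (LIFO). No renormalization here.
    freqs = {"a": 3, "b": 1}          # toy frequency table (assumed)
    M = sum(freqs.values())           # total frequency
    cum, acc = {}, 0
    for s, f in freqs.items():        # cumulative frequencies
        cum[s] = acc
        acc += f

    def encode(msg, x=1):
        for s in msg:
            f, c = freqs[s], cum[s]
            x = (x // f) * M + c + (x % f)
        return x

    def decode(x, n):
        out = []
        for _ in range(n):
            slot = x % M
            s = next(t for t in freqs if cum[t] <= slot < cum[t] + freqs[t])
            out.append(s)
            x = freqs[s] * (x // M) + slot - cum[s]
        return "".join(reversed(out)), x

    x = encode("abaa")
    print(decode(x, 4))               # -> ('abaa', 1)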
Lorentz invariance appears naturally in practically all theories with waves - its violation would be a huge surprise.
To see it and understand STR, the perfect model is sine-Gordon: just many coupled pendula. We get particles ("kinks") with rest mass, which are created/annihilated in pairs; the mass grows with velocity exactly as in STR and is released during annihilation ... while moving, these particles undergo Lorentz contraction (their speed is limited by the speed of the massless waves), and oscillating particles ("breathers") slow down (time dilation) - exactly as in STR.
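A minimal sketch of the math behind that claim, in the standard normalization where the speed of the massless waves is 1 (textbook sine-Gordon results):

    \phi_{tt} - \phi_{xx} + \sin\phi = 0, \qquad
    \phi_{\text{kink}}(x,t) = 4\arctan\!\left(e^{\gamma(x - vt)}\right), \qquad
    \gamma = \frac{1}{\sqrt{1 - v^2}}

The kink's width shrinks by the factor 1/\gamma (Lorentz contraction), and its energy is E = 8\gamma, growing with velocity exactly like a relativistic particle of rest mass 8 in these units.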
I feel like this is a common misconception amongst some physicists. Lorentz invariance doesn't "appear naturally" in modern theories; it depends on how you develop them. Usually, one chooses a Lagrangian that yields a Lorentz-invariant action, so all physical laws, and thus all solutions, are consequently Lorentz invariant. Lorentz invariance is a fundamental assumption... upheld by experiment. Those theories will then admit solutions (usually linearized ones (read: quantized)) that appear as waves.
There are lots of non-relativistic models using waves. Nothing special about them. The most compelling argument for relativity is causality. You can reconstruct spacetime – up to conformal transformations – just from the causality relations. I can't even imagine what physics would be like w/o causality.
GR does not imply causality nor does it enforce it. In fact GR works in a non-causal universe without a problem.
Two very sensitive measurements conducted within the past year seem to suggest (if GR is true) that we are in a universe that lacks causality. Two of the three LIGO detections imply that one of the merging pair of black holes should be a naked singularity.
GR allows naked singularities. It models them fine. GR just stops being globally deterministic.
If you look up the history of GR, some mathematicians in the '50s made some really weird proposals for non-causal universes that would appear locally causal. But there isn't a way to test this, so it is more pure mathematics or philosophy than physics.
Thanks for the links. I find it unfortunate that often in science (and especially with the LIGO data) much is written about what could possibly be lurking in the data but isn't actually favored over our current understanding.
This creates more interest, but it can obfuscate what the real situation in the field is. In this case, while gravastars are certainly something many scientists do and should actively consider, there is no real evidence from the LIGO data that favors the hypothesis that "we are in a universe that lacks causality" over the observation of a merger of two Kerr black holes.
I think you're being a bit harsh. Is there any empirical evidence that favors classic black holes over gravastars, or is our "current understanding" just a matter of what we thought of first? If the latter, take a chill pill and let us enjoy the possibilities. :)
> GR does not imply causality nor does it enforce it. In fact GR works in a non-causal universe without a problem.
One of the key assumptions of GR is that spacetime is globally hyperbolic. This implies causality. You can’t guarantee solutions of the Einstein or Maxwell equations w/o this assumption.
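For concreteness, the textbook chain of implications behind that statement (a sketch of standard results, not a new claim):

    (M,g)\ \text{globally hyperbolic} \iff \exists\ \text{Cauchy surface}\ \Sigma
    \;\Rightarrow\; M \cong \mathbb{R}\times\Sigma\ \text{(Geroch)},\ \text{no closed causal curves},
    \ \text{and}\ \Box_g\phi = 0\ \text{has a well-posed initial-value problem with data on}\ \Sigma.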
If you restrict the speed of propagation of interactions (of massless waves), you nearly automatically get STR ... as in the sine-Gordon model: the speed of massive kinks becomes limited, and the kinks contract toward zero width as they approach this limit.
This was covered in my undergraduate (second-year) SR course. The idea is that if you accept that Maxwell's equations apply in all inertial frames (speed of light is constant) and that causality is conserved in all frames, the result is that Newtonian mechanics requires adjustments for effects that are called "special relativity".
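A condensed version of that derivation, assuming only a linear map between inertial frames and that a light pulse satisfies x = ct and x' = ct' in both:

    x' = \gamma(x - vt), \quad x = \gamma(x' + vt')
    \;\Rightarrow\; c^2\, t\, t' = \gamma^2 (c-v)(c+v)\, t\, t'
    \;\Rightarrow\; \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \quad
    t' = \gamma\!\left(t - \frac{vx}{c^2}\right)

Time dilation and length contraction follow directly, and Newtonian mechanics is recovered for v \ll c.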
ED: This is the starting point of the Causal Set [1] program. Don't ask me for details, I don't know any. But the wikipedia article looks interesting. Seems they are trying to figure out how causality restricts models with some level of discreteness.