
While it's appealing to believe Pinker, his treatment of the statistics and probability of war has been debunked. For a mathematical analysis of the history of war, see Cirillo and Taleb's paper "On the statistical properties and tail risk of violent conflict" (https://www.fooledbyrandomness.com/violence.pdf).

From the paper: "Accordingly, the public intellectual arena has witnessed active debates, such as the one between Steven Pinker on one side, and John Gray on the other concerning the hypothesis that the long peace was a statistically established phenomenon or a mere statistical sampling error that is characteristic of heavy-tailed processes, [16] and [27]–the latter of which is corroborated by this paper."
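As a toy illustration of the heavy-tail point (my own sketch; the Pareto tail exponent and the "major event" threshold are arbitrary choices, not the paper's methodology), a heavy-tailed process routinely produces long peaceful stretches by pure chance:

    # Toy sketch with arbitrary parameters, not Cirillo and Taleb's method.
    import random

    random.seed(0)
    alpha = 1.5                             # tail exponent < 2: infinite variance
    sizes = [random.paretovariate(alpha) for _ in range(500)]
    big = [i for i, s in enumerate(sizes) if s > 20]    # "major" events
    lulls = [b - a for a, b in zip(big, big[1:])]
    print("major events:", len(big))
    print("longest lull:", max(lulls) if lulls else "n/a")

A long lull in such a sample says little about the underlying process; that is the sampling-error argument in miniature.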


Unsure, but maybe related to newer considerations/nuance:

Scaling theory of armed-conflict avalanches https://www.santafe.edu/news-center/news/avalanche-violence-...

Podcast: https://complexity.simplecast.com/episodes/39-9ugXDtkC


I always thought the intention was to get the same deal as Australia: https://www.bbc.co.uk/news/world-australia-56163550. It's not a surprise that Canada's media industry wanted the same deal and the government was willing to go to bat for it.

The BBC article from 2021 linked above even says "The law is seen as a test case for similar regulation around the world."


> Their point about diff pairs being more resistant to noise is correct, but it’s not the primary reason for using diff pairs.

The article is correct. This comment is wrong.

> Differential signals, at a physical layer level, are faster than single ended IO. It’s because you have double the current drive/sinking capability with two drivers.

The load is also double in a diff pair compared to a single wire, so the net effect is a wash compared to a single wire.

> You’re also getting capacitive coupling between each leg of the pair working in your favor, which keeps your edge transition nice and fast.

The opposite is true. Differential capacitance effectively appears 2X higher than the nominal capacitance to differential signals, making it a drawback of differential signaling rather than a benefit.
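A back-of-the-envelope check of the "wash" point, with made-up numbers (slew rate scales as I/C, so doubling both drive and load changes nothing):

    # Illustrative numbers, not from any datasheet: slew rate ~ I / C.
    I_se, C_se = 10e-3, 2e-12          # single-ended: one driver, one line's load
    I_df, C_df = 2 * I_se, 2 * C_se    # differential: two drivers, two lines
    print(I_se / C_se)                 # 5e9 V/s
    print(I_df / C_df)                 # 5e9 V/s again: a wash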


When you see a differential pair, simply imagine two independent lines that are each referenced to ground. (That's close to what happens on the PCB anyway, if a ground plane is present. Most of the return current ends up ground-referenced.) Two lines that would have exhibited 50-ohm characteristic impedance in a single-ended circuit will form a ~100-ohm diff pair.

In other words, the capacitance isn't doubled, since the capacitance is split by the imaginary ground between the two lines. It looks like two caps in series, not in parallel. Same is true for the load resistance.
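A quick sanity check of that picture, with illustrative per-line values of my own choosing:

    # Two 50-ohm single-ended lines used as a pair (values are illustrative).
    Z0 = 50.0                     # each line to the virtual ground between them
    print(2 * Z0)                 # differential impedance: the series path, ~100 ohms

    C_line = 2e-12                # each line's capacitance to the virtual ground
    print(C_line / 2)             # seen differentially: two caps in series, halved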


I have designed PCIe-compatible transceivers, and this will be the last comment I make about it, because correcting hardware nonsense on HN is only of transient interest to me.

Everything I wrote is 100% correct and in fact incontrovertible. Whether there is a little or a lot of differential capacitance does not change the fact that the differential portion of the capacitance has a 2X effect on the differential signal (as opposed to the common-mode signal, which it has no effect on). This is supported by basic math.

If capacitance is to ground, then it is not differential capacitance, so it is not relevant to this discussion. It may be true that differential capacitance is not a significant contributor to the impedance of PCB differential traces, but that does not change the fundamental result (similarly, the principle of photovoltaic conversion still holds in the dark, even though there is little light to convert).

And PCB traces are not the only kind of differential pair. Diff pairs also exist inside the integrated circuits that drive the PCBs, where they operate less like transmission lines and more like lumped capacitances, given the frequencies of interest compared to the dimensions of the conductors.

In these circuit and conductor structures, differential capacitance can be significant, and this is what OP was talking about, since he was referring to the legs of the driver (transistors). OP was just wrong about the differential capacitance being good for speed. It's bad for speed.
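For concreteness, here is that basic math as a sketch with arbitrary values (only the factor of 2 and the zero matter):

    # Effect of a bridging (differential) cap C_d on each leg; values arbitrary.
    C_d, dt = 1e-12, 1e-9          # bridging cap, edge time
    dVp = 0.5                      # per-leg voltage step in volts

    i_diff = C_d * (2 * dVp) / dt  # differential: Vn moves opposite, d(Vp-Vn) = 2*dVp
    i_cm = C_d * 0.0 / dt          # common mode: Vn moves with Vp, d(Vp-Vn) = 0
    print(i_diff, i_cm)            # 0.001 A per leg vs 0.0 A: 2x C_d, or no load at all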


Meh. There isn't a whole heck of a lot of capacitance between two parallel traces. Not compared to a single-ended trace that is (almost necessarily) referenced to one or two planes on adjacent layer(s).


> The problem is that engineers like to work in silos and they do not communicate well with other engineers.

This is an organizational problem that should be solved so that meetings are avoided as much as possible. The main way I have seen it solved is by engineering teams having clear APIs between them. I use the term API loosely: it may be a software API between software teams, but it may also be a document, a hardware interface, or some other kind of interface.

Without this sort of structure, communication requirements grow quadratically with team size: n*(n-1)/2 channels, where n is the number of people.
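A quick illustration:

    # n people need n * (n - 1) / 2 pairwise channels.
    for n in (5, 10, 20, 50):
        print(n, "people:", n * (n - 1) // 2, "channels")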


I got obsessed with this paper recently, to the point where I have read most of the "Unknown Air Force Document" that Thompson references as giving him the idea of the Trojan horse. The document was later identified and is now declassified and publicly available [1].

> If one reads the original paper, one only finds a description of this attack as a thought experiment, leading one to conclude that any claim of a real-world attack by Thompson was an urban myth due to exaggeration.

This is true, although Thompson gives some tantalizing hints in the paper.

In the introduction, he writes "I would like to present to you the cutest program I ever wrote." So he definitely wrote it and at least played around with it.

Later on in the "Moral" section, he writes "The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.)"

This appears to be an admission, but it is not quite strong or direct enough to confirm that he implemented and used the Trojan horse, so it is great to read this post.

[1] https://csrc.nist.gov/csrc/media/publications/conference-pap...


Thanks so much for sharing the older MULTICS report; this is fascinating stuff.


I was just trying out the API and noticed the same thing. I found the API key near the bottom on my https://my.nextdns.io/account page.


More information available at https://artsandculture.google.com/story/SQWxuZfE5ki3mQ.

I am curious if any art-inclined people on HN can comment on how close they believe the results to be to the original.


> I agree these are annoying patterns, but I honestly don’t notice them. I don’t read research papers like a book, front to back.

I think you've hit the nail on the head. The article's "sins" don't matter because experts skip over these parts, and that is why they are not a focus for the authors.

I still think the article points out things that can be improved, but the benefit is taking a good paper and making it more palatable to people who are not experts or on the way to being experts, thus expanding the population of people who might cite the paper.


> The article's "sins" don't matter because experts skip over these parts.

Yes and no. Yes in the sense that if you are an expert trying to stay current in your field, you will skip around a lot in most papers.

But no in the sense that when you find a paper that is especially relevant to what you are doing, you will read and scrutinize every single word, symbol, and figure in that paper until it’s completely mined of all relevant information.

I think one reason the intro isn’t always a huge focus is because it’s literally written last in many cases. There are typically page limits, or a cost per page. What you’ve got to say about your research could fill hundreds of pages, so you already have to cut that down. Once you’ve said what you need to about the actual work it’s probably already past the submission deadline, and you don’t want to spend forever writing an intro that will just end up costing you more pages. It’s basically going to fit into whatever space is left to round the paper to the next full page.


I was wondering how efficiency is defined for a port.

I found what I think is the original source for the list: https://ihsmarkit.com/research-analysis/new-global-container...

Inferring from the description in the link, it seems like the overall ranking is based on a combined metric. They do mention "minutes per container move" as a key metric ranging from Yokohama's 1.1 minutes to Africa's average 3.6 minutes.

From a high-level perspective, and in analogy with computer systems, it makes sense that time efficiency is the most critical metric.


Seems to me like the two numbers that matter are:

Mean time from docking to last container being outside of port property

Mean cost per TEU


It already has been a problem in terms of gate leakage, although largely mitigated by material improvements.

Gate leakage is quantum tunneling through the gate dielectric barrier; it started appearing as gate dielectrics became thinner and thinner. It was mitigated by moving to higher-k dielectrics (from silicon dioxide, SiO2, to more exotic materials that include other elements such as hafnium).

Higher-k dielectrics allow the same capacitance per unit area, and hence the same channel control, with a physically thicker gate dielectric than plain SiO2, reducing gate leakage. This change came along with metal gates (replacing polysilicon) as a combined advance that Intel incorporated a few years before TSMC, IIRC circa 2008.
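As a rough sketch of the trade-off (textbook parallel-plate numbers of my choosing, not any foundry's data):

    # C/A = k * eps0 / t, so matching C/A at higher k permits a thicker film.
    eps0 = 8.854e-12                     # F/m
    k_sio2, t_sio2 = 3.9, 1.2e-9         # ~1.2 nm SiO2: severe tunneling regime
    k_hik = 20.0                         # rough value for a hafnium-based dielectric

    c_per_area = k_sio2 * eps0 / t_sio2  # target capacitance per unit area
    t_hik = k_hik * eps0 / c_per_area    # thickness giving the same C/A
    print(t_hik)                         # ~6.2e-9 m: ~5x thicker, far less tunneling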

This is a circuit designer's perspective. Someone who actually understands device physics and material properties can chime in to correct me.


Your nickname rings familiar. Ex-Xilinx by any chance?


Do you have any books that you recommend on this topic? I would love to get my hands dirty with this to the degree that I can.

