
I'm not sure what the point of this blog post was. As far as I can tell, the author discovered eval() but is making it more complicated for no reason? There also isn't any actual patching going on.


Taking mobile orders usually requires accepting coupons/rewards programs, which franchise owners are not always obligated to participate in.


Imperial China is basically the Greek/Roman empire of East Asia. Both of these civilizations made a deep and lasting impact on the cultures of East Asia and of Europe/North Africa/West Asia, respectively.

Confucius is like a Chinese Plato/Socrates.


China only really had a strong impact within China itself and in Korea, though a lot of Chinese people tend to overstate their impact out of cultural chauvinism. Outside of possibly the Viet region as well, Imperial Chinese culture has had relatively little impact on the rest of East Asia (the Philippines, Japan, Thailand, Malaysia, Indonesia, etc.), and most of the Chinese culture in these regions is from recent immigration in the past century or so.

EDIT: not sure why the post is being negged, considering it's basically true. I suggest people actually travel around the region and find out how little Chinese culture has impacted East Asia. I am a long-term resident of East Asia and I understand the culture more than most Chinese people.


China's influence on Japan has been huge... It's very odd to read a claim that Chinese culture has had little impact on Japan!

Of course there are specifically Japanese aspects, but more often than not there is Chinese influence: architecture, writing, religion, dress, food, everywhere.


You could argue that other cultures have had similar impacts on Japan as well, including Western and Indian. What I disagree with is the idea that China is the "Greece of Asia," which is laughable at best. How exactly has China had a major influence on Japanese religion, architecture and dress? Chinese food shares the table with numerous cuisines from across Asia, and Chinese writing is shared with Japanese, Western and Indian writing influences as well.


It's easy to come up with many examples of major Chinese cultural elements in the surrounding countries.

I'll give just one obvious example: Kanji. Kanji is obviously a major part of Japanese culture. It literally means "Chinese characters," because that's what it was adapted from.[0]

Or to give you just one more example, because it's incredibly striking: Japan's own name for itself, Nihon/Nippon, is borrowed from Chinese. The same is actually true of "Vietnam," which is also a loanword from Chinese.

0. https://en.wikipedia.org/wiki/Kanji


But the relative impact is low. Latin and Indian scripts are used throughout Asia as well, including Japan. That is, to say that China is the "Greece of Asia" is a bit over-chauvinistic.


Somewhere around half of the words in modern Korean, Japanese and Vietnamese are borrowed from Chinese. The impact of Chinese culture is massive across the board in these countries.


"Half the words in modern Japanese" is a long stretch. All Japanese words have two readings, so obviously you can say that, but only one of those readings is used in daily conversation.


The linguistic influence of Chinese in Japanese is similar to the linguistic influence of Latin and French on English. The more elevated a subject, the higher the percentage of Latin or French borrowings.

Everyday spoken language will have a lower rate of loanwords, but writing about any complex subject will have a higher level.

Once you get into philosophy, politics, literature, etc., you're dealing with a high level of borrowing, both in the case of English and of Japanese.

But regardless of whether the true rate of loanwords from Chinese is 20% or 50%, this is a massive level of linguistic influence. And again, this is just one aspect of China's cultural influence on Japan. I mentioned the Kanji writing system earlier, which is very important in Japanese culture, but I could also point out many other areas where there's significant Chinese influence, such as literature, philosophy or religion, or even more banal things like the game of Go, which is one of the most popular games in China, Korea and Japan.




It's common practice for law firms (and firms in some other fields, like private equity) to put the headshots of all employees on their website, with job title and description in a nice table.

Example of a different firm where you can even filter by school: https://www.wlrk.com/attorneys/?asf_l=View%20All


A lot of people in this thread are criticizing this move, but let me offer an opposite view.

One of the largest electronic health records systems has code that predates the UNIX epoch. Much of the time-handling code is custom-written to deal with this. However, the code was so poorly written that the system would lose data during the double 1 am window that occurs during the daylight saving time shift. Hospitals would just shut off all of their computers during this window to deal with it.

As the article notes, issues with leap seconds have also brought down Reddit and Cloudflare. Many people in this thread are treating this like some sort of display of incompetence, but if you've ever written code that deeply interacts with time, you'd know how difficult it is to get right. A sign of a good system is that it is difficult to fuck up.

IMO it is better to guarantee that time always moves forward rather than trying to match computer time to human time.
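
To illustrate what "always moves forward" buys you, here's a minimal sketch in Python (the sleep is just a placeholder workload): the wall clock can be stepped backwards by NTP corrections or clock changes, while the monotonic clock is guaranteed never to run backwards.

    import time

    # Wall-clock time: subject to NTP steps, leap-second handling and manual
    # changes, so the difference between two readings can even be negative.
    wall_start = time.time()

    # Monotonic time: only meaningful for measuring intervals, but guaranteed
    # by the OS never to jump backwards.
    mono_start = time.monotonic()

    time.sleep(1.0)  # placeholder for real work

    wall_elapsed = time.time() - wall_start       # can be wrong if the clock was stepped mid-run
    mono_elapsed = time.monotonic() - mono_start  # always >= 0 and close to the real elapsed time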


There’s no need to redefine UTC. If you want a fixed monotonic reference clock, you want TAI (the atomic reference), not UTC.

https://www.nist.gov/pml/time-and-frequency-division/nist-ti...


I don't see how replacing all UTC in software with TAI is more realistic than breaking UTC sync with UT1 (isn't it literally doing the same thing?). The whole point is that going forward, leap seconds are going to get harder to deal with, especially in the case of a negative leap second, which seems like a more "true" Y2K-like scenario.


The difference is that replacing usage of UTC with TAI is a voluntary choice made per program. Redefining UTC to be a fixed offset relative to TAI, which is effectively just redefining UTC to be TAI, is a forced change on everything everywhere all at once, which everybody has to handle because one of their dependencies changed.

It would be like silently changing the start of unix epoch time to 1800 instead of adding a new “Unix time since 1800” and asking people to switch.


Not at all. Everybody using UTC would just not need to deal with leap seconds anymore. A UTC second is the same as a TAI second. It's a no-op for the vast majority of UTC users. UTC will just drift slightly more from UT1.

This change only affects people who need UTC to be close to UT1 and also somehow don't know what UT1 is.


Sure, everybody using UTC when they actually want TAI would be a no-op, but then you irreversibly break everybody who actually wants UTC and assumed that UTC would not change meanings.

The people who would be unaffected by the redefinition can already just trivially switch manually (as we already assumed that just redefining things under them would work), leaving the UTC people alone. There is no good reason to silently break all programs carefully designed to use UTC correctly to fix all of the programs haphazardly written by people who did not know what they were doing and used UTC when they actually wanted TAI. Especially since fixing the wrong use of UTC is so trivial that we assume it can be done with no modification.


‘Programs carefully designed to use UTC’ would only irreversibly break by very slowly becoming out of sync with the rotation of the earth.

A few applications should switch standards; the question is whether solar-concerned applications should switch to UT1, or continuity-concerned applications should switch to TAI. The former is simpler, easier, cheaper, and only causes unexpected behavior (quite slowly), NOT systematic failure.


> IMO it is better to guarantee that time always moves forward rather than trying to match computer time to human time.

Not sure if you're playing Cunningham's Law or if you don't know that this was the line of thought until everything got so far out of touch with reality that 10 days of time never existed and official records were kept with dual dates.

https://en.wikipedia.org/wiki/Old_Style_and_New_Style_dates

https://www.timeanddate.com/calendar/julian-gregorian-switch...


> However, the code was so poorly written that the system would lose data during the double 1 am window that occurs during the daylight saving time shift.
> [...]
> Many people in this thread are treating this like some sort of display of incompetence, but if you've ever written code that deeply interacts with time, you'd know how difficult it is to get right.

Your example only speaks for the incompetence argument.

In reality, times and dates are really complicated. Luckily, the engineers at Facebook, Reddit, and Cloudflare are being paid hundreds of thousands of dollars to show off their expertise. Is it that much to ask for them to read up on details like leap seconds?


It is too much. I was a Google SRE, and there is an internal meme showing a time-series graph jumping backwards during the double 1 am at the DST shift. These mistakes happen everywhere and are best avoided by a system that doesn't allow them to happen in the first place.
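
To make the ambiguity concrete: during the fall-back transition the same local wall-clock time occurs twice, so a naive local timestamp can't distinguish the two instants. A minimal sketch in Python using the stdlib zoneinfo (the date is just the 2023 US transition, picked as an example):

    from datetime import datetime
    from zoneinfo import ZoneInfo

    tz = ZoneInfo("America/New_York")

    # 2023-11-05 01:30 local time happened twice: once in EDT, once in EST.
    first = datetime(2023, 11, 5, 1, 30, tzinfo=tz)           # fold=0 -> first occurrence (EDT)
    second = datetime(2023, 11, 5, 1, 30, fold=1, tzinfo=tz)  # fold=1 -> second occurrence (EST)

    print(first.tzname(), second.tzname())  # EDT EST
    # A bare "2023-11-05 01:30" record cannot tell these two instants apart,
    # which is exactly how data gets lost or double-counted in that window.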


So advocates of memory safe (or even high level, period) programming languages are just showing off their incompetence in your book?

Would you say to an advocate of C (much less ... Rust): Look man, real programmers write in Boolean circuits. Programming is hard, sure, but the engineers at Facebook, Reddit, and Cloudflare are being paid hundreds of thousands of dollars to show off their expertise. Is it that much to ask for them to read into details like multiplication circuits?

:)

Leap seconds causing widespread failures isn't a hypothetical, just like buffer overflows aren't. Yet, in theory, with perfectly competent development ...

Yet even with perfect competence, leap seconds are still pretty gnarly: they require that systems have a trustworthy and consistent source for the list of leap seconds... and they mean that you fundamentally cannot predict the amount of time between two UTC timestamps when one or more of them is more than 6 months in the future... and no amount of competence can fix that.
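
To make the "trustworthy list" point concrete, here's a minimal sketch (the table below is abridged and hand-written purely for illustration; real systems have to fetch and refresh it from IERS Bulletin C, tzdata or similar):

    from datetime import datetime, timezone

    # Abridged, hand-written TAI-UTC offset table (seconds), keyed by the UTC
    # instant each leap second took effect. Real code has to fetch and refresh
    # this from an authoritative source (IERS Bulletin C, tzdata, etc.).
    LEAP_TABLE = [
        (datetime(2015, 7, 1, tzinfo=timezone.utc), 36),
        (datetime(2017, 1, 1, tzinfo=timezone.utc), 37),
    ]

    def tai_minus_utc(t: datetime) -> int:
        """TAI-UTC in seconds at UTC time t, according to the table above."""
        offset = 35  # value in force before the first entry of this abridged table
        for effective, seconds in LEAP_TABLE:
            if t >= effective:
                offset = seconds
        return offset

    # The catch: leap seconds are only announced about six months in advance, so
    # for a UTC timestamp further out than that, no table can exist yet, and the
    # exact number of SI seconds until it is simply unknowable today.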


> Hospitals would just shut off all of their computers during this time to deal with it.

FWIW, there are many things that deal with leap seconds that way too. There's too much risk of ending up in a difficult-to-fix or silently corrupt state, while coming up from a reboot is highly tested and known to work.

The cost of leap seconds is quite significant.

> but if you've ever written code that deeply interacts with time, you'd know how difficult it is to get right.

Good odds that even if someone has, they got it wrong and don't know it. Leap seconds especially are fairly hard to test, particularly in distributed systems, and infrequent enough that you may not realize the cause even when you've suffered from an issue.


If you are relying on the clocks of all actors in a distributed system being perfectly in sync, you already have a bug, leap seconds or not (unless you are Google Spanner).

For timers within a single system, use the monotonic clock of your own CPU.
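
For instance, a minimal timeout helper sketched in Python (the function and parameter names are made up for illustration):

    import time

    def wait_for(condition, timeout_s: float = 5.0, poll_s: float = 0.05) -> bool:
        """Poll `condition` until it returns True or `timeout_s` elapses.

        Uses the monotonic clock, so an NTP step, DST change or leap adjustment
        of the wall clock mid-wait can't make the timeout fire early or never.
        """
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if condition():
                return True
            time.sleep(poll_s)
        return False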


Bear in mind that nearly all of the people receiving letters should be <65. I quickly skimmed the paper, and it seems like the authors don't have enough data to come up with a quality-adjusted life-year type of statistic like the NHS uses.


I disagree with this definition. We have yet to produce a perfect model of the world (a.k.a. a theory of everything). All models produced by "science" thus far are "wrong", at least on some level (e.g., Newton's model doesn't cover relativity). I think "Creating models with predictive power is also a precise definition of science" is a fair description.


I think it's fair to say that a "theory of everything" is sort of the great work of any particular field of science. In practice that means refining models, but the model-building is ancillary to the truth-finding, not the other way around. Of course, if the truth wasn't predictive we're all just screwed, but that doesn't mean that whatever is predictive is necessarily the truth. It just means we might all be screwed.


This comment is incredibly out of touch with the world of big law. Not only do associates get cut, but many people stall out and can't make partner.

(also the whole concept of "Up or out" comes from Big Law/Consulting... https://en.wikipedia.org/wiki/Up_or_out)

You are just as "set for life" in big tech as in big law. In fact, if you're looking at the last decade, big tech won hard, considering tech stocks and how big law froze (and even slightly cut) salaries during the Great Recession.


> That's why we use double reviews.

We being Europeans. Double reads are not standard in the majority of the world, including the US.

