"Only five of the 15 titles on the list are real."

Over and over again.



This is a pretty cool take. Note that there would also be potential in the reverse direction (e.g., inferences from uploaded graffiti photos).


"the grandest engineering project of all time"


"In the cyber security industry, however, marketing is everything. Names are chosen to invoke a visceral reaction and to promote fear. That fear helps to turn people towards expensive high-tech security products."

"Often, the high-tech services that the cyber security sector sells protect the front door, while offenders continue to sneak in the back one using low-tech methods."


I too can quote using copy-paste.


It looks interesting, sure, but please do not link to these platforms (ResearchGate, Academia.edu, etc.) here.


OK, I guess - why?


No compelling reasons (i.e., do what you want), but they're basically social-media-style parasites for people who haven't yet discovered preprint repositories.


"Typically, in 97% of the cases the OP does not change his mind but in about 3% of the cases he does."

Another way to put the "someone is wrong on the Internet" thing.

"This brings us closer to my central argument: the AI's success was'’t due to superior reasoning or a reliance on facts and logic, but rather because it effectively 'hacked' the mechanics of persuasion."

Indeed, and that's what many people here, too, have been saying for a long time. Though note that at 3% it is all questionable, to say the least.


"So, what other solutions can we invent?"

I suppose containers or Ubuntu's snaps might be examples (for good or bad).


Packages may support compilation flags, but (realistically) consumers don't want to bother with them, and releasers are not obligated to provide them.

Instead, an AI creates ed(1) scripts that refactor the code. Not only must the rewritten code be human-readable; the transform itself must be too.

These codemods modify the library source at build time. If you don't want the UI, the relevant parts get edited out or swapped for no-ops.
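
A minimal sketch of what such a build-time codemod could look like (Python; the file name "mylib/ui.py" and the function draw_ui() are hypothetical placeholders, not anything from the thread). It swaps an unwanted UI entry point for a no-op while keeping the signature, so callers still work:

  # Hedged sketch; all names are made up for illustration.
  import re
  from pathlib import Path

  SRC = Path("mylib/ui.py")  # hypothetical library file touched at build time

  NOOP = (
      "def draw_ui(*args, **kwargs):\n"
      "    return None  # UI disabled at build time\n"
  )

  def strip_ui(source: str) -> str:
      # Replace the body of draw_ui() but keep the name, so existing
      # callers still import and call it without changes.
      return re.sub(r"def draw_ui\(.*?\):\n(?:[ \t]+.*\n)+", NOOP, source)

  if __name__ == "__main__":
      SRC.write_text(strip_ui(SRC.read_text()))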

Maybe we could monetize code edits. However, each successive edit has to reward the preceding ones; like a chip wafer, each step makes the result more valuable.

How to do that?
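
One way to sketch the reward chaining (my own guess, not something from the thread): treat each edit's value-added as a weight and split every sale of the final artifact across the whole chain, so earlier edits keep earning from later ones.

  def split_payment(price: float, value_added: list[float]) -> list[float]:
      # Pay every edit in the chain in proportion to the value it added.
      total = sum(value_added)
      return [price * v / total for v in value_added]

  # Three successive edits added value 5, 3 and 2; a sale of 10 units
  # pays them 5.0, 3.0 and 2.0 respectively.
  print(split_payment(10.0, [5.0, 3.0, 2.0]))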


"We were only able to find released code for 6 of these 100 papers, and what's worse, only 6 of the 88 remaining papers contained a specific listing for the algorithm."


"Specifically, they want to focus more on testing validity, which for quantitative social scientists refers to how well a given questionnaire measures what it’s claiming to measure—and, more fundamentally, whether what it is measuring has a coherent definition."

Ref.:

https://news.ycombinator.com/item?id=43933962

https://news.ycombinator.com/item?id=43927550

