sorenjan's comments

I have an 8 core CPU and a 10 TFLOP GPU, and the cursor on this site lags in Firefox but is noticeably smoother in Edge.

It's usable (as in smooth-ish) but still visibly laggy compared to the normal mouse cursor, in Chrome, on my hefty M1 Pro MacBook for work.

Also... I fucking hate it. I don't want my mouse to stick to buttons, or to change colors constantly.

The "emotion" this site generated for me was "anger". If this is the pitch for the new design system, my journey of not using Material is coming to a middle.


What did they use for placebo? Talking to somebody without education, or not talking to anybody at all?

Not talking to anyone at all.

What did they do then? If they didn't do anything, how can it be considered a placebo?

It's called a "waitlist" control group, and it's not intended to represent placebo. Or at least, it shouldn't be billed that way. It's not an ideal study design, but it's common enough that you could use it to compare one therapy to another based on their results vs a waitlist control. Placebo control for psychotherapy is tricky and more expensive, and can be hard to get the funding to do it properly.

Does it matter? The point is AI made it worse.

That seems like a very poor control group.

That is one of my concerns.

You might want to extract the location from the image EXIF data and include it in the prompt as well. There are reverse geocoding libraries and services that take coordinates and return a city, which would probably make for a better summary of a trip.
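A rough sketch of that idea. Reading the EXIF would use a library such as Pillow, and the reverse geocoding a library or web service such as Nominatim; both are assumptions, not something the comment specifies. The stdlib-only step shown here is converting EXIF's GPS degrees/minutes/seconds rationals into the decimal coordinates a reverse geocoder expects:

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert an EXIF GPS (degrees, minutes, seconds) triple plus a
    hemisphere ref ("N"/"S"/"E"/"W") into signed decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative by convention.
    return -value if ref in ("S", "W") else value

# Illustrative coordinates (Eiffel Tower latitude: 48 deg 51' 29.6" N).
lat = dms_to_decimal(48, 51, 29.6, "N")
lon = dms_to_decimal(2, 17, 40.2, "E")
# Pass (lat, lon) to a reverse geocoder to get a city name like "Paris"
# for the prompt.
```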

I don't manage any Blender renders, so this isn't for me, but if you want to use Python for the TUI part as well you can use Textual [0].

A lot of the Textual-based programs[1] can be run with uvx without having to install them, e.g.:

  uvx --python 3.12 textual-demo
  uvx usolitaire
[0] https://github.com/Textualize/textual

[1] https://github.com/oleksis/awesome-textualize-projects


The Blender project already has a batch job queue manager called Flamenco:

https://flamenco.blender.org/

The client is cross-platform and only requires a network share.

It may be wise to add any features you need to the project... =3


Cannot speak highly enough of the projects released by the fine folks at Textualize thus far, starting with Wil McGugan's "Rich" and everything that they've built around that fantastic "running start". It's truly revolutionized how text / console UI is done in Python. :)


No.

"The peoples of Europe are fair-skinned and reddish, because they live in a cold climate and are not scorched by the sun."

Source: Hippocrates, On Airs, Waters, and Places, 5th century BC.

"The physical characteristics of the Germans are consistent: blue eyes, reddish hair, and large bodies."

(Tacitus, Germania, chapter 4)

Egyptian tomb paintings from the New Kingdom (c. 1550–1070 BC) visually represent foreign peoples with distinct skin tones - "Europeans", in the form of Sea Peoples or early Aegean peoples (e.g., Minoans or Mycenaeans), were sometimes shown with light red or pale skin tones.


The agricultural revolution was roughly 10,000 BC, wasn't it? Your sources are more recent than that, and so don't disagree with the point the parent post was making.

According to [1] it's more like 6000-4000 BC for Europe. Granted, that's still before 1700 BC.

From a cursory study of Wikipedia my rough summary would be: Europe used to be roughly divided into the "Western Hunter-Gatherers" (WHG) and "Eastern Hunter-Gatherers" (EHG). The WHG typically had dark skin, dark hair, and blue eyes; the EHG were typically light-skinned with brown eyes. Blond hair may have originated among EHGs in North Eurasia and spread from there. Around 6000 BC farmers from Anatolia (~modern Turkey) started moving into Europe, the EEF (Early European Farmers). These were typically smaller than European hunter-gatherers, light-skinned and dark-haired. They migrated north, partially replacing the EHG and WHG, partially mixing with them, and in some places the EHG and WHG simply took up farming. But Eastern Europe is less amenable to farming, meaning the dark-skinned WHG diminished the most, while the light-skinned EHG and EEF became the dominant groups in Europe's genetic diversity.

1: https://en.wikipedia.org/wiki/Early_European_Farmers#/media/...


Yes, there was an agricultural revolution during Neolithic times. We have evidence of this for Southwest Asia (aka the Middle East), Asia around the Yangtze and Yellow rivers, and South, Central, and North America.

I am open to this hypothesis (conjecture?), it just lacks supporting evidence. On the other hand, we have ample evidence that the agricultural revolution did not "turn people white" in the other regions.


There are extensions that do the same thing, like this one: https://addons.mozilla.org/en-US/firefox/addon/panorama-view...


That's addressed in the paper:

  Open access to data and code
  Question: Does the paper provide open access to the data and code, with sufficient instruc-
  tions to faithfully reproduce the main experimental results, as described in supplemental
  material?
  Answer: [No]
  Justification: Although we have answered “No” for now, we intend to release the code and
  models to enable the reproducibility of our main experimental results, pending approval
  from the legal department. This temporary status reflects our commitment to open access
  once all necessary permissions are secured.


For context, the paper was released a few months ago.


These results look incredible, and with an inference time of only 36 ms for a 4X super resolution on a V100.


They should make a temporally coherent version of CosAE to replace this: https://blogs.nvidia.com/blog/rtx-video-super-resolution/


BMW has a page describing the use of UWB (Ultra-Wideband) radio in key fobs and how it helps against relay attacks. In short, the wide bandwidth allows for very short pulses, which lets them measure the distance between the car and the key; using a relay inevitably adds distance, and therefore time, between when the signal is sent and the reply is received.

https://www.bmw.com/en/innovation/bmw-digital-key-plus-ultra...
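A back-of-the-envelope sketch of the timing argument (all numbers are illustrative, not from BMW or any UWB specification): the car times the round trip of a pulse, subtracts the key's known reply delay, and converts the remaining time of flight to a distance bound. Since a relay can only add latency, it can only make the key look farther away, never closer:

```python
C = 299_792_458  # speed of light in m/s

def distance_upper_bound_m(round_trip_ns: float, reply_delay_ns: float) -> float:
    """Upper bound on car-to-key distance from a timed challenge/response.
    The key's processing delay is assumed fixed and known to the car."""
    time_of_flight_s = (round_trip_ns - reply_delay_ns) * 1e-9
    return C * time_of_flight_s / 2  # halved: the pulse travels out and back

direct = distance_upper_bound_m(110.0, 100.0)   # key genuinely ~1.5 m away
relayed = distance_upper_bound_m(150.0, 100.0)  # a relay adds ~40 ns of latency

# The relayed exchange appears several metres farther away, so the car can
# refuse to unlock even though the relay forwarded a "valid" response.
```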


I believe this was ratified into a standard so it should show up in more new cars. https://carconnectivity.org/car-connectivity-consortium-publ...


The core problem is that older systems never proved distance in any rigorous sense; they only proved connectivity/liveness. Pretending that you're closer than you are is sometimes called the "mafia fraud attack" in the research literature.


I don't think these two statements are fully compatible:

> Do any of the following apply to you?

> - You want to avoid hard-coding paths

> ...

> If so, you’ll benefit from a CMake-like build system.

and

> Don’t GLOB files

I find it very annoying that I, a human, am expected to keep a current list of all source files, when listing files is something computers are very good at. They even tell us how to structure the project with a src folder, but I still have to remember to add or remove source files in the right CMakeLists.txt when the contents of that folder change. It's poor design.
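For what it's worth, the globbing approach the article advises against is a couple of lines of CMake. A hedged sketch (the target name and src/ layout are made up for illustration), using the CONFIGURE_DEPENDS flag added in CMake 3.12:

```cmake
# Collect every .cpp under src/ instead of listing each file by hand.
# CONFIGURE_DEPENDS asks CMake to re-check the glob on each build and
# re-run the configure step when the directory contents change,
# at some build-time cost. "myapp" is a placeholder target name.
file(GLOB_RECURSE MYAPP_SOURCES CONFIGURE_DEPENDS "src/*.cpp")
add_executable(myapp ${MYAPP_SOURCES})
```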


Blanket advice always has edge cases. As for whether GLOB'ing is good or bad: it pushes all the structure into your filesystem.

This is fine if your directories map exactly to binaries/libs/what-have-you, but what if you want unit tests for the functions in each source file? Should you compile those into your binary? You could move them all into a tests folder and have a bin directory and a tests directory, but what if you want a test binary per source file (maybe your tests erratically take out their host process)?

The bottom line is that there has to be structure _somewhere_. If you're leaning heavily into CMake and your project isn't trivial, some of that structure may as well go into the build system.


DRY applies here. Placing files in a directory hierarchy and/or naming them according to a strict convention is "writing it down" once; repeating that in a text config file is writing it down a second time.

A good build system should use a concise but configurable rule to decide what to build as far as possible. Whether the details of that rule are "compile every file below this directory" or "compile every file that matches *.cpp but not *Test.cpp" (or some combination, or similar) isn't important.

Then it's the programmer's responsibility to "write the info" correctly by strictly conforming to that rule in how they name and place files, and/or tweaking the rule if necessary (e.g., "... But don't compile any file that has a period as the first character of its filename").


Rules can be clumsy, and information has different purposes. The generic, concrete concern of keeping related files together and under source control is typically a much better match for meaningful file names and directories than any specific purpose like compiling some build target. This makes globbing a cheap shortcut that replaces an explicit specification of how to use the files, except for special dynamic cases where file names are genuinely unknown.

As an example of the file system and build system disagreeing: what if you start writing a new source file that you don't want compiled and linked until it's ready and needed? If globbed file names are the source of truth, either the uncompilable draft of your new file has to be placed in some inconvenient alternate location where the build system doesn't pick it up, and later moved to a regular source folder, or it breaks the build until you waste enough time hacking together something that can be compiled (or even compiled and linked).

An IDE can use actual file lists and its internal project configurations to manage file references in build scripts (e.g. silently updating build scripts when files are renamed or moved or offering a good UI to add source files to build targets).


> what if you start writing a new source file that you don't want to compile and link until it's ready and needed?

What makes using a different top-level directory (something that would work with nearly all build systems) inconvenient? But if you really want these not-to-be-compiled-yet files nearby in the directory tree, customise the rule to, e.g., not compile anything whose filename begins with IGNORE, or that sits in a subdirectory called IGNORE, or (if you want to get fancy) that is listed in a .compileignore file (a la .gitignore). If the number of exceptions is small, then explicitly marking them in any of these ways is DRYer (= requires less maintenance) than explicitly recording the larger number of "business as usual" cases a second time.
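That "customise the rule" idea can be sketched in CMake itself (the IGNORE_ naming convention and target name here are hypothetical, just to show the shape): glob first, then filter out the drafts by convention:

```cmake
# Illustrative only: glob everything under src/, then drop drafts that
# follow a hypothetical IGNORE_ filename convention.
# list(FILTER ... EXCLUDE REGEX) requires CMake >= 3.6.
file(GLOB_RECURSE MYAPP_SOURCES CONFIGURE_DEPENDS "src/*.cpp")
list(FILTER MYAPP_SOURCES EXCLUDE REGEX "/IGNORE_[^/]+\\.cpp$")
add_executable(myapp ${MYAPP_SOURCES})
```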

> An IDE can use actual file lists and its internal project configurations

If you rely on an IDE to this extent, it's effectively part of your build system. (Which is absolutely fine, and has some benefits, though also some downsides.)

Provided that the IDE is "doing the work for you" (e.g., by adding each new .cpp file you create to the list of files to compile, until you uncheck a box somewhere that removes it), you're good. The only questions then are (1) whether this IDE configuration info is handled sanely by the VCS, and (2) whether it's readily understandable and usable to someone without the IDE (e.g., a CI build server). If the IDE stores this info in an externally defined plain-text build script like a Makefile or Maven pom.xml, then the answer to both questions is probably "yes" -- but, depending on how expressive the build script language is, it might be difficult or error-prone for the IDE to be able to update it reliably if the user makes their own changes to it as well.


All of my projects GLOB source files, headers, and test source with CONFIGURE_DEPENDS. Haven’t had a problem yet.


Yet you won't find a CMake best practices text that won't mention how bad globbing is.


There's the traditional criminal behaviour of CMake: setting a variable to a globbed set of files ONCE and obliviously caching the value ever after. A build system where that is possible cannot claim to be well designed.


True. But I don't think CMake has ever been accused of being well designed.


You don't notice it on an SSD, but globbing hits spinning drives hard.


For what it's worth, I think (modern) MSBuild does it the best way I've seen so far. It operates on a "mostly" include-by-default basis: the compilers "know" what valid files are, and everything valid in and below the project dir is included by default. You can edit the project file if you wish to include or exclude specific files or directories. Before .NET Core, you had to specify every file (ignore by default). Visual Studio would handle that for you, but adding new files often led to version control conflicts.

Ultimately, what I'm trying to say is that MSBuild has a good set of default globbing rules that are very easy to fine-tune.


I've never respected the "don't use GLOBs" recommendation with CMake and have had practically no issues. They even added CONFIGURE_DEPENDS to ease the process.

To me, the arguments against using GLOBs here seem too contrived to matter for modern C++ developers.

