This is a nicely-designed study. For decades, we've known that inflammation is a risk factor for heart attacks.
In this study, the researchers designed a custom antibody that binds to oral bacteria, then used histological staining to identify specific biofilm structures inside atherosclerotic tissue. Bacteria released from the biofilm were observed in heart attack cases, which gives us evidence that the immune response to these bacteria triggered the inflammation that ruptured cholesterol-laden plaque. So now we have more insight into the mechanism behind why inflammation is associated with heart attack risk.
The "pantheon" of risk factors for heart disease is:
* hs-CRP (inflammation): the mechanism studied by this research. High inflammation roughly doubles your risk of heart disease.
* ApoB - 20% of people with normal cholesterol have abnormal ApoB and are still at risk of heart disease (ApoB is a structural protein in the lipoproteins that cause arterial plaque).
* Lp(a) - the strongest hereditary risk factor for heart disease (Lp(a) acts as a multiplier on ApoB, since it camouflages cholesterol particles from your liver).
* HbA1c - insulin resistance/diabetes is a risk factor for just about everything.
* eGFR - estimates the rate at which your kidneys can filter blood, and is an input to the latest heart disease risk models (PREVENT).
In my opinion, the newer Atkinson Hyperlegible (Next) is easier to read over the long term. Maybe B612 is 'better' when you only have to read a few words on a screen, but I've switched to Hyperlegible recently for ebooks and have enjoyed it.
It is actually more complicated than that: there is also legal precedent for being forced to provide a passcode, but both biometrics and passcodes remain open legal questions.
It has been argued successfully that giving biometrics is analogous to giving blood, hair, fingerprints, standing in a lineup, providing a writing sample, or wearing certain clothes, all of which you can be compelled to do.
From my understanding, the current split over whether you can be compelled to provide passcodes, and to a much lesser extent biometrics, stems from the foregone conclusion exception established in Fisher v. United States [1], where, as Justice White wrote, “the existence and locations of the papers [were] a foregone conclusion and the [defendant’s physical act] adds little or nothing to the sum total of the Government’s information by conceding that he in fact has the papers… [And so] no constitutional rights [were] touched. The question [was] not of testimony but of surrender.”
This has been applied in court cases on biometrics and passcodes [2]. Courts that rule you can be compelled seem to look narrowly at the passcode itself, i.e., the government knows you own the phone and knows you can unlock it, so providing the passcode is a foregone conclusion. Courts that rule you cannot be compelled seem to look at the phone's contents, i.e., the government does not know what is on the phone, so decrypting the data would amount to providing protected testimony; some take the stricter view that you cannot be compelled to disclose the contents of your mind. The Supreme Court has declined multiple times to hear cases that would help settle the legal ambiguity, so it remains an evolving issue.
In short, a passcode is not a panacea and you may be compelled to provide it.
The crawler companies just do not give a shit. They're running these crawl jobs because they can; the methodology is worthless and the data will be worthless, but they have so much compute relative to developer time that it costs them more to figure out which crawls are worthless than to just do the crawl, throw away the worthless data at the end, and crawl again. Meanwhile they perform the internet's most widespread DDoS (which is against the CFAA, btw, so if they caused actual damages to you, try suing them). I don't personally take issue with web crawling as a concept (how else would search engines work? oh, they don't work any more anyway), but the implementation is obviously a failure.
---
I've noticed one crawling my copy of Gitea for the last few months - fetching every combination of https://server/commithash/filepath. My server isn't overloaded by this. It did fill up the disk by generating every possible snapshot, but I count that as a bug in Gitea, not an attack by the crawler. Still, the situation is very dumb, so for the last few days I've set my reverse proxy to feed every AI crawler request a variant of the Wikipedia home page. The variant has several sections replaced with nonsense, some AI-generated and some not. You can see it here: https://git.immibis.com/gptblock.html
I just checked, and they're still crawling, now 3 layers deep into the page's image tags. Since every URL returns that page when you have the wrong user-agent, the images return it too; they happen to be at relative paths, so I can tell how many layers deep the crawler is looking.
Interestingly, if you ask ChatGPT to evaluate the page (its interactive page fetches are not blocked), it says it's a fake Wikipedia page. You'd think they could use their own technology to evaluate pages.
---
nginx rules for your convenience; be prepared to adjust the filters according to the actual traffic you see in your logs:
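A minimal sketch of what such rules might look like, assuming nginx is reverse-proxying the site (as in the Gitea setup above). The user-agent patterns, the /gptblock.html decoy path, and the backend port are illustrative assumptions, not a definitive list; tune them to what your logs actually show:

# Classify requests by user-agent (example patterns, not exhaustive).
map $http_user_agent $is_ai_crawler {
    default        0;
    ~*GPTBot       1;
    ~*ClaudeBot    1;
    ~*CCBot        1;
    ~*Bytespider   1;
    ~*Amazonbot    1;
}

server {
    # ... listen, server_name, TLS, etc. ...

    # The decoy page itself (path on disk is hypothetical).
    location = /gptblock.html {
        root /var/www/decoy;
    }

    location / {
        # Matched crawlers get the decoy; everyone else reaches the backend.
        if ($is_ai_crawler) {
            rewrite ^ /gptblock.html last;
        }
        proxy_pass http://127.0.0.1:3000;  # e.g. a local Gitea instance
    }
}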
I love these incomprehensible magic number optimizations. Every time I see one, I wonder how many optimizations like this we missed back in the old days, when we were writing all our inner loops in assembly.
- GIF: gifsicle with --lossy (squeeze out the best possible out of the ancient format)
- PNG: pngquant (lossy but respectful of what many PNGs are: lots of flat surfaces + limited color palette, transparency)
- JPG: jpegli (writing this post mostly to share the news about this one: it's new, built from libjxl learnings, and approximately as good qualitatively as Guetzli while being 1000x faster; that's an actual number, not an exaggeration. It's amazing! https://github.com/libjxl/libjxl/tree/main/lib/jpegli )
Below is my lil' `image-opt` helper, making for a nice workflow to optimize any gif/jpg/png file(s) :)
#!/usr/bin/env bash
set -euo pipefail
dbg() { if [ "${DEBUG:=false}" = 'true' ]; then echo "$@"; fi } # set DEBUG=true for verbose tracing
die() { echo "Error: $*" 1>&2; zenity --error --text="$*"; exit 1; }
if ! command -v pngquant > /dev/null; then die 'You must install pngquant'; fi
if ! command -v cjpegli > /dev/null; then die 'You must install cjpegli'; fi
if ! command -v gifsicle > /dev/null; then die 'You must install gifsicle'; fi
if ! command -v gio > /dev/null; then die 'You must install gio (GLib)'; fi
for filename in "$@"; do
  if [ -d "$filename" ]; then
    echo && echo "ℹ File '$filename' is a directory. Skipping."
    continue
  fi
  if [ ! -f "$filename" ]; then
    echo && echo "ℹ File '$filename' does not exist. Skipping."
    continue
  fi
  extension_raw="${filename##*.}"
  extension="${extension_raw,,}" # Convert to lowercase
  if [[ "$extension" != 'jpg' && "$extension" != 'png' && "$extension" != 'gif' ]]; then
    echo && echo "ℹ Skipping file $filename due to unsupported extension: .$extension"
    continue
  fi
  filename_noext="${filename%.*}"
  filename_original="${filename_noext}_beforeOpt.${extension_raw}"
  filename_final="${filename_noext}.${extension}"
  dbg "filename($filename), filename_noext($filename_noext), extension($extension), filename_original($filename_original)"
  echo && echo " Compressing $filename ..."
  # Keep the original under a _beforeOpt name while we work; it is trashed
  # (recoverably) at the end of each iteration.
  mv "$filename" "$filename_original"
  if [ "$extension" == 'jpg' ]; then
    # Produce a range of quality variants; pick the smallest one that still
    # looks good, then delete the rest.
    for q in 50 60 70 80 90; do
      cjpegli "$filename_original" "${filename_noext}_q${q}.${extension}" -q "$q"
    done
  elif [ "$extension" == 'png' ]; then
    pngquant --strip --force --verbose --output "$filename_final" "$filename_original"
  elif [ "$extension" == 'gif' ]; then
    # https://kornel.ski/lossygif , set --lossy to 20 for light compression, 200 for heavy
    gifsicle -O3 --lossy=50 --verbose --output "$filename_final" "$filename_original"
  fi
  gio trash "$filename_original"
done
sync
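Usage is straightforward (assuming you save it as `image-opt` somewhere on your PATH and chmod +x it):

image-opt photo.jpg banner.gif screenshots/*.png

JPGs come out as five quality variants to pick from by eye; PNGs and GIFs are optimized in place, and every original goes to the trash rather than being deleted outright.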
It's super easy to run LLMs and Stable Diffusion locally -- and it'll do what you ask without lecturing you.
If you have a beefy machine (like a Mac Studio), your local LLMs will likely run faster than OpenAI or Gemini. And you get to choose which models work best for you.
Check out LM Studio, which makes it super easy to run LLMs locally. AUTOMATIC1111 makes it simple to run Stable Diffusion locally. I highly recommend both.
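For a taste of how simple the plumbing is: LM Studio can expose an OpenAI-compatible HTTP server for whichever model you have loaded (by default on localhost:1234), so anything that already speaks the OpenAI API can point at it. A minimal sketch; the model name here is a placeholder for whatever you have loaded:

curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "local-model",
        "messages": [{"role": "user", "content": "Hello from my own hardware"}]
      }'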
Lukewarm isn't a good candidate for an antonym, because it describes a specific temperature. Not an exact temperature (one can't mark "lukewarm" on a thermometer), but consider being presented with two bowls of soup and asked which is "more lukewarm". Either neither is lukewarm, one is and the other isn't, or both are. In the last case, how do you decide which is more so? Is it the slightly warmer one, or the slightly cooler one?
It's like with cooking steak: you could say well-done is the antonym of rare, and I'd agree[†] with that as a premise. But medium doesn't have an antonym, and it makes little sense to call medium-well the antonym of medium-rare; they aren't opposites.
[†] If you would prefer "blue" or even "raw" for maximum contrast, that's fine by me, both cool and cold are antonyms[‡] of warm, same principle.