Thanks. So this has modern "s" characters, rather than the ones that look like an "f" missing its horizontal bar, which is an improvement. Still, when an "s" or "c" is followed by a "t", there's a ligature connecting them. This kind of makes sense (the sounds are different, after all), but it took me a while to figure out.
Thanks for all the explanations. I always thought it was regular HTML, but now I know to watch out for the differences.
Can you say a few more words about the library https://github.com/standardebooks/tools ?
Can it generate ePub3 from Markdown files, or do I have to feed it HTML already? Any repo with usage examples of the `--white-label` option would be nice.
The tooling does two main things: create a valid epub3 skeleton for your content, and build your book into “compatible”, Kobo, and Kindle versions. You need to supply the valid XHTML yourself.
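If it helps, a minimal session might look like this; `se create-draft` and `se build` are the tool's entry points, but treat the exact flags as assumptions (the author name and title are made up) and check `se help` for your version:

```shell
# Scaffold a new ebook skeleton (directory layout, metadata, placeholder files)
se create-draft --author "Jane Austen" --title "Persuasion"

# ...add your XHTML chapters under src/epub/text/, then build.
# --kobo and --kindle emit the device-specific variants alongside the epub.
se build --kobo --kindle .
```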
I've been in Grayscale for some time now (almost a month), and it's great. I always wanted a phone with an eInk display, and this comes pretty close, at least aesthetically.
Scrolling is no longer interesting, and food looks unappetizing. Making digital reality look boring is a fair trade for making the real world look more exciting.
Thanks to comments from @jtbaker and @SkyPuncher, I just added a shortcut to the "pull out" menu, so I can now turn it off when I need to work with pictures where colors are important.
The result of `Y @ X` has shape `(3,)`, so the next line (concatenating as columns) fails.
To make `Z` a column vector, we would need something like `Z = (Y @ X)[:, np.newaxis]`.
Though I'm not sure why the author uses `concatenate` here when the more idiomatic function would be `stack`. The change you suggest works and is pretty clean:
Z = Y @ X
np.stack([Z, Z], axis=1)
# array([[14, 14],
# [32, 32],
# [50, 50]])
with the convention that vectors have shape (3,) instead of (3,1).
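For completeness, the `concatenate` route also works once `Z` is made a column vector. `Y` and `X` here are assumed inputs, picked only to reproduce the printed output above:

```python
import numpy as np

# Assumed inputs, chosen to match the output shown upthread.
Y = np.arange(1, 10).reshape(3, 3)
X = np.array([1, 2, 3])

Z = Y @ X              # shape (3,)
Zc = Z[:, np.newaxis]  # shape (3, 1): a column vector

out = np.concatenate([Zc, Zc], axis=1)
print(out)
# [[14 14]
#  [32 32]
#  [50 50]]
```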
You do realize how many arbitrary words and concepts there are here that are nowhere to be found in “conventional” math, right?
I know what all of these do, but I just can’t shake the feeling that I’m constantly fighting with an actual python. Very aptly named.
I also think it has more to do with the libraries than with the language, whose syntax I actually like. But numpy tries to be this highly unopinionated tool that can do everything, and ends up annoying to use for anything. Matplotlib is even worse; possibly the worst API I’ve ever used.
Yes, I was (and still am) similarly impressed with LLMs’ ability to understand the intent of my queries and requests.
I've tried several times to understand the "multi-head attention" mechanism that powers this understanding, but I have yet to build a deep intuition.
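In case it helps anyone else stuck in the same place, the core of a single attention head boils down to a few lines. This is a NumPy sketch of scaled dot-product attention (multi-head attention just runs several of these in parallel on learned linear projections and concatenates the results); the shapes and random inputs are arbitrary, for illustration only:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) @ V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # (n_q, n_k): query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V                            # weighted mix of value vectors

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out = attention(Q, K, V)  # shape (4, 8): one mixed value vector per query
```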
Are there any research or expository papers that discuss this "understanding" aspect specifically? How could we measure understanding without generation? Are there benchmarks out there specifically designed to test deep/nuanced understanding skills?
Any pointers or recommended reading would be much appreciated.