
For scripting... HIGHLY recommend putting your dependencies inline.

E.g.:

  #!/usr/bin/env python3
  # /// script
  # requires-python = ">=3.11"
  # dependencies = [
  #     "psycopg2-binary",
  #     "pyyaml",
  # ]
  # ///
Then -

  uv run -s file.py
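The comment block is PEP 723 inline script metadata, which uv reads to build an isolated environment before running the script. A minimal self-contained sketch (stdlib only here, so the dependency list is empty):

```python
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.11"
# dependencies = []
# ///
# `uv run -s` reads the metadata block above, resolves any listed
# dependencies into a throwaway environment, then executes the script.
import json

print(json.dumps({"script": "self-contained"}))
```

With third-party packages in `dependencies`, the script runs the same way on any machine with uv installed, no virtualenv setup required.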

I think people / the market have gotten a little too excited about something AI is actually pretty bad at - making changes to existing code (which is, after all, most of the code).

AI software devs don’t understand requirements well, and they don’t write code that conforms to established architecture practices. They will happily create redundant functions and routes and tables in order to deliver “working code”.

So AI coding is bunk? No, it’s just that the primary value lies elsewhere than in code generation. You can stuff A LOT of context into an LLM and ask it to explain how the system works. You can ask it to create a design that emulates existing patterns. You can feed it a diff of code and ask it to look for common problems and anti-patterns. You can ask it to create custom diagrams, documentation and descriptions, and it will do so quickly and accurately.

These are all use cases that assist with coding, yet don’t involve actually writing code. They make developers more knowledgeable and they assist with decision making and situational awareness. They reduce tedium and drudgery without turning developers into mindless “clickers of the Tab key”.


The ability to use lenses with custom assistants is the killer feature IMO.

Want to search for open-source projects that implement some algorithm? Create a GitHub/GitLab lens. Want to ask it questions only about some framework? Add its domain to a lens.


For many, many years, I've adopted the OP's choice of style, using what I call "FIRST/NEXT" comments to divide the function into paragraphs:

    // FIRST, Create the pizza object
    ...
    // NEXT, Add the toppings
    ...
    // NEXT, Heat the oven
    ...
By all means, move a "paragraph" into its own function if it's called more than once; but otherwise this provides a number of useful features:

* The FIRST/NEXT comments serve as useful headers, making it possible to navigate the function without reading the code in detail.

* I know that no one's going to call one of the blocks from outside.

* I can see at a glance what chunks of code go together.

I've often gone back and read code I wrote five, ten, twenty, thirty years ago using this method, and found it perfectly readable.
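Adapted to Python comment syntax (the pizza steps are of course a toy stand-in), the pattern looks like:

```python
def make_pizza(toppings):
    # FIRST, Create the pizza object
    pizza = {"base": "dough", "toppings": [], "baked": False}

    # NEXT, Add the toppings
    for topping in toppings:
        pizza["toppings"].append(topping)

    # NEXT, Heat the oven (modeled as a flag in this sketch)
    pizza["baked"] = True
    return pizza
```

Each FIRST/NEXT line reads as a header, so you can skim the function's structure without reading any paragraph's body.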


Yep. This app uses 230 MB of RAM on my machine, compared to Spotify's 208 MB. But it's definitely more performant than my hideously slow Electron Spotify client. I'm really done with Electron. I hope this shaming of Electron apps continues, because I can't stand this degradation of software. The only recent Electron app with good performance is Notion Calendar (formerly Cron). Notion itself, though, is painfully slow. This is why I'm building a Notion alternative in Qt C++ and QML[1].

[1] https://www.get-plume.com/

EDIT: Is the app down? It doesn't load the "Browse" content for me.


Helix is cool, but it doesn't have Vim keybinds, which is unfortunate. I find its own system to be a little worse than Vim's (see my other comment about it in this thread).

Alternatives: zoxide, autojump, fasd, and probably a couple of others.

I don’t use Polars directly, but instead I use it as a materialization format in my DuckDB workflows.

duckdb.query(sql).pl() is much faster than duckdb.query(sql).df(). It's zero-copy to Polars and happens almost instantaneously, while converting to Pandas takes quite a while if the DataFrame is big. And you can manipulate the result much like a Pandas DataFrame (albeit with slightly different syntax).

It’s great for working with big datasets.


Tools like Dagster, dbt, and AWS Glue always strike me as built for BI/in-house analytics over user & app-generated data.

What do people use for ingesting data from end users, handling their transactional CRUD on it, and then running OLAP queries triggered (not written) by and for those same end users?


This single file shows the entire set of OT transformations (retain, insert, delete):

https://github.com/Operational-Transformation/ot.js/blob/mas...

and this is a good post outlining the basics of OT, from the creator of CodeMirror:

https://marijnhaverbeke.nl/blog/collaborative-editing-cm.htm...
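In ot.js, an operation is a sequence of those three primitives that walks the old document from start to end. A toy sketch of applying one in Python (hypothetical tuple encoding, not ot.js's actual API):

```python
def apply_op(doc, ops):
    """Apply an OT operation: a list of ('retain', n), ('insert', s), ('delete', n)."""
    out = []
    pos = 0
    for kind, arg in ops:
        if kind == "retain":
            out.append(doc[pos:pos + arg])  # keep n chars unchanged
            pos += arg
        elif kind == "insert":
            out.append(arg)                 # add new text, cursor doesn't move
        elif kind == "delete":
            pos += arg                      # skip n chars of the old doc
    out.append(doc[pos:])  # toy shortcut; real OT ops must cover the whole doc
    return "".join(out)

print(apply_op("hello world", [("retain", 6), ("delete", 5), ("insert", "there")]))
# retain "hello ", delete "world", insert "there" -> "hello there"
```

The hard part OT adds on top of this is transform(a, b): rewriting two concurrent operations so they can be applied in either order with the same result.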


The real magic is this query here (which I got via Twitter after posting this article):

    SELECT
        SUM(size) AS size
    FROM read_parquet(
        list_transform(
            generate_series(0, 55),
            n -> 'https://huggingface.co/datasets/vivym/midjourney-messages/resolve/main/data/' ||
                format('{:06d}', n) || '.parquet'
        )
    );
Turns out DuckDB has a lambda function feature these days! The n -> syntax.
