Hacker News | NeutralCrane's comments

It’s the equivalent of someone running on a platform where there would be world peace and no hunger.

That’s great and all as an ideal, but realistically impossible, so if you don’t have anything more substantial to offer, then you aren’t really worth taking seriously.


Ironically, some of the worst tech debt I’ve ever dealt with has been because the initial implementation was an overengineered disaster by a dev who thought they were solving all possible problems before we really understood what all the possible problems were.

“Zero tech debt” is an impossibility. The most elegant solutions incur some kind of tech debt, it’s just less than others. More realistic than “zero tech debt” is a continuing dedication to addressing tech debt combined with using implementations that minimize “one way doors”.


Are there other alternatives you have been looking at? I’m just getting started looking at these LLM gateways. I was under the impression that LiteLLM was pretty popular but you are not the only one here with negative things to say about it.

I am planning to try any-llm-gateway that this post is about. We don't need anything fancy, so it seems that this might cover our needs.

We are just now looking into LLM Gateways and LiteLLM was one I was considering looking into. I’m curious to hear more about what makes the code quality garbage.

I've deployed LiteLLM proxy in a number of locations and we're looking to swap it out (probably to Bifrost). We've seen many bugs with it that never should have made it to a release. Most stem from poor code quality or what I'd classify as poor development practices. It's also slow, it doesn't scale well, and it adds a lot of latency.

Bugs include, but are not limited to: multiple ways budget limits aren't enforced, parameter handling issues, configuration/state mismatches, etc.

What makes this worse is that if you come to the devs with the problem, a solution, and even a PR, it's very difficult to get them to understand or action it - let alone see critical things like major budget blowouts as a priority.


What about forking it for your own use? Not worth it for the bugs you had fixes for?

Not worth the technical debt and architecture of the codebase. To be honest I'd sooner completely rewrite it in Golang/Rust or otherwise.

How do you like bugs where tools are not working, but only for the Ollama provider and only when streaming is enabled? That is one of the real issues I had to debug with LiteLLM.

I personally had no issues using the client libs; my only complaint is that they only offer official Python ones. I'd love to see them publish a TypeScript one.

That doesn’t say anything about pushing people away from using products with AI though. People are enormously negative about the effects of social media, and yet social media use is incredibly pervasive and sticky.

I thought that the loss of trust would be self-explanatory in that regard? You don't buy stuff you don't trust, yes?

Anyways: https://futurism.com/the-byte/study-consumers-turned-off-pro...

Researchers have found that including the words “artificial intelligence” in product marketing is a major turn-off for consumers, suggesting a growing backlash and disillusionment with the tech — and that startups trying to cram “AI” into their product are actually making a grave error.


My point is that what people say and what people do are not the same thing. It may sound self-explanatory that if people don’t trust AI, they will avoid AI products, but I’m interested in data proving this. Self-reported attitudes regarding AI are not the same as customers actively avoiding products using AI.

I agree with your observation re. what people say/do. However, you know as well as I do that there are rarely studies/data on people avoiding stuff. How would you even go about proving a negative? So, let's turn this around: can you show me data that confirms people are enthusiastic to buy AI-enhanced things? Data that confirms people's widespread acceptance and/or even preference for AI-enhanced commodities?

There is no need for us random civilians to know the truth of these matters. Employees inside the company can see analytics that show whether the features are working or not.

If it competes with a PS5, but runs my Steam Library, it’s automatically won IMO.


> All that means is the companies charge everyone else more to subsidize the cost of those who are more at risk.

Literally the purpose of insurance


Not private insurance. We all pay into it obviously, but our individual rates in an insurance market are based on individual risk. My rate is only impacted by others in a relative sense, if I'm more risky than someone else I pay more.

With preexisting conditions off the table, my rates may go up only because someone else is a higher risk and the insurance company can't charge them for it.

I said this in another post, but morally I don't want others to be denied health care. I don't want health insurance at all in that case because insurance implies that you pay more for riskier coverage.


Huh?

The purpose of insurance is to mitigate the risk of a very costly but unlikely outcome by paying a smaller amount over time, thereby spreading that risk among those of similar risk.

Not being able to consider individual risk means that insurance makes no sense for those with low risk profiles, because they’re in the same cohort as those who will _definitely_ file claims.
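A toy expected-value sketch (all numbers are made up) of the adverse-selection pressure described above: under community rating, low-risk members pay well above their expected cost, which gives them an incentive to drop out.

```python
# Toy numbers (entirely hypothetical) for two risk groups sharing one pool.
low_risk_cost = 1_000   # expected annual claims per low-risk member
high_risk_cost = 9_000  # expected annual claims per high-risk member
n_low, n_high = 80, 20

# Risk-based pricing: each member pays roughly their own expected cost.
# Community rating: everyone pays the pooled average instead.
pooled_premium = (n_low * low_risk_cost + n_high * high_risk_cost) / (n_low + n_high)

# Low-risk members now pay 2.6x their expected cost, so the rational move
# is to drop coverage -- which raises the pool's average cost further.
print(pooled_premium, pooled_premium / low_risk_cost)  # prints: 2600.0 2.6
```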


Cohorts are based off of your employer, because we, inexplicably, tied health insurance to your employer. If you work for a very young and hip company then no, your cohort might not file claims.

There's levels of broken-ness to healthcare in the US. Even if you allow health insurance to discriminate based on health conditions, it will still be broken in other ways.


That's one way, true. I've mostly been considering the ACA here and those getting coverage that can't get it through an employer.

Employer health insurance rates can still get wonky for small businesses though. It probably can't happen today, but I was at a small business where everyone's rates went up shortly after one person was diagnosed with cancer and another one or two with diabetes.

That is an example of it not really being individual insurance though. The insurance company is just lumping the employees together and setting rates based on the relative risk of the whole group, not dissimilar from getting an individual policy where the rates are based on a group of one.


How do you propose we address adverse selection in insurance markets, then? That's the part you're overlooking that's making you go "Huh?". It's clear to everyone else.


You don't.

Health insurance is intended to mitigate the risk of unexpected high costs, not pay for your normal healthcare.

You're thinking of a healthcare _plan_. Trying to make the insurance model fit where it doesn't work is the root of the vast majority of our issues in the US.


Fascinating. My friends and I were obsessed with stickpage.com in the early to mid 2000s, where Xiao Xiao and a bunch of imitators and derivatives were hosted.

I kind of miss that era of the internet where there were random, niche sites you would fall in love with and it wasn’t all just a post on YouTube or Facebook or Reddit or something.


I’m not a data engineer but work in an adjacent role. Is there anyone here who could dumb the use case down? Maybe an example of a problem this solves. I am struggling to understand the value proposition here.


> Maybe an example of a problem this solves.

Some service writes a lot of data in parquet files stored on S3 (e.g. logs), and now you want that data to be queryable from your application as if it was in postgres (e.g. near real-time analytics dashboard). pg_lake allows you to load these parquet files into postgres and query the data. You can also join that data with existing tables in postgres.
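A rough sketch of what that might look like in practice. The `USING iceberg` clause is what this thread describes pg_lake providing; table and column names here are hypothetical, and the exact DDL options may differ, so check the pg_lake docs.

```sql
-- Hypothetical sketch: an Iceberg-backed table whose data lives as
-- Parquet on S3, managed by pg_lake rather than Postgres heap storage.
CREATE TABLE request_logs (ts timestamptz, user_id bigint, status int)
  USING iceberg;

-- Query it like any Postgres table, including joins with ordinary tables.
SELECT u.email, count(*) AS errors
FROM request_logs l
JOIN users u ON u.id = l.user_id
WHERE l.status >= 500
GROUP BY u.email;
```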


Been experimenting with OpenTelemetry->Parquet conversion lately for logs, metrics, and traces. Lots of related projects popping up in this area. It's powerful and cheap.

- https://github.com/smithclay/otlp2parquet (shameless plug, based on Clickhouse's Otel schema)
- https://github.com/Mooncake-Labs/moonlink (also has OTLP support)
- https://github.com/open-telemetry/otel-arrow (official community project under early dev)


I guess my confusion is that there already are ways to query this data with DuckDB or something like that. So is the magic here that it’s Postgres? What makes being able to query something in Postgres special? And when we say it’s now queryable by Postgres, does this mean that it takes that data and stores it in your PG db? Or it remains in S3 and this is a translation layer for querying with PG?


Not sure if I have this right, but this is how I understand it:

> So is the magic here that it's Postgres? What makes being able to query something in Postgres special?

There are a bunch of pros and cons to using Postgres vs. DuckDB. The basic difference is OLTP vs. OLAP. It seems pg_lake aims to give you the best of both. You can combine analytics queries with transactional queries.

pg_lake also stores and manages the Iceberg catalog. If you use DuckDB you'll need to have an external catalog to get the same guarantees.

I think if you're someone who was happy using Postgres, but had to explore alternatives like DuckDB because Postgres couldn't meet your OLAP needs, a solution like pg_lake would make your life a lot simpler. Instead of deploying a whole new OLAP system, you basically just install this extension and create the tables you want OLAP performance from with `create table ... using iceberg`

> when we say it’s now queryable by Postgres, does this mean that it takes that data and stores it in your PG db?

Postgres basically stores pointers to the data in S3. These pointers are in the Iceberg catalog that pg_lake manages. The tables managed by pg_lake are special tables defined with `create table ... using iceberg` which stores the data in Iceberg/Parquet files on S3 and executes queries partially with the DuckDB engine and partially with the Postgres engine.

It looks like there is good support for copying between the Iceberg/DuckDB/Parquet world and the traditional Postgres world.

> Or it remains in S3 and this is a translation layer for querying with PG?

Yes I think that's right -- things stay in S3 and there is a translation layer so Postgres can use DuckDB to interact with the Iceberg tables on S3. If you're updating a table created with `create table ... using iceberg`, I think all the data remains in S3 and is stored in Parquet files, safely/transactionally managed via the Iceberg format.

https://github.com/Snowflake-Labs/pg_lake/blob/main/docs/ice...
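Since the thread says pg_lake tables behave like regular Postgres tables and support copying between the Iceberg and heap worlds, the movement should be plain SQL. A hedged sketch with hypothetical names (only standard `INSERT ... SELECT` / `CREATE TABLE AS` plus the `USING iceberg` clause mentioned above):

```sql
-- Hypothetical names; assumes pg_lake's USING iceberg tables are
-- readable and writable like any other Postgres table.
CREATE TABLE events_iceberg (id bigint, payload jsonb) USING iceberg;

-- Copy from an ordinary heap table into the Iceberg-backed one...
INSERT INTO events_iceberg SELECT id, payload FROM events_heap;

-- ...and materialize a slice back into a plain Postgres table.
CREATE TABLE recent_events AS
  SELECT * FROM events_iceberg WHERE id > 1000;
```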


I feel like if Python’s lack of typing is going to be considered a drawback, then the solution needs to be true typing. Rust’s strict typing is an advantage, not a drawback.
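For context, the "lack of typing" complaint is usually that Python's type hints are purely advisory at runtime; a minimal illustration:

```python
def add(a: int, b: int) -> int:
    return a + b

# Hints are not enforced at runtime: passing strings raises no error.
# The annotations are ignored and string concatenation happens instead.
print(add("1", "2"))  # prints: 12  (the string "12", not the number 3)
```

Catching that mismatch requires an external checker such as mypy; Rust would reject the equivalent call at compile time.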

