jsmith99's comments | Hacker News

MS Copilot is quite useful for meeting minutes, summaries, etc. It's still not nearly as useful as good handwritten notes, but it saves loads of time.

Sure, it's useful. But we're talking about making-money useful, not just nice. Will it create 20% more revenue? 1% more? Or is it just a nice-to-have?

It is not so much that used diamonds are worth less (although they might decline in value without provenance to prove they are natural or if they are chipped) but the huge markup on retail jewellery. It's easy for any member of the public to buy and sell gold at close to market price; it's much harder with diamonds.


> It is not so much that used diamonds are worth less (although they might decline in value without provenance to prove they are natural or if they are chipped) but the huge markup on retail jewellery.

Precisely.

And on top of that, some jewelry stores are worried that customers would consider a below-wholesale offer to be insulting, so they often refuse to buy a piece back at all.


TL;DR: Amazon somehow merged a malicious PR that changed the system prompt to one that would aim to delete everything, locally and in the cloud, and this got included in the release version.


Well "rm -rf /" was a little too obvious. Though at a former job that exact line of code did make it into production once. Wasn't a fun day.


What a vibe


Yes, the repo readme says the code is open source but the fee is required for using the repo's issues and releases features.


I'd be surprised if the GitHub EULA allows you to just attach rules to who can click the releases button.

For issues and discussion, sure, that's essentially moderation. But surely you can't make a EULA that says you can't click on a GitHub-provided feature unless you agree to some arbitrary third party's rules.


I just use an MCP server (with Copilot or Cline) that has a read-only login to my database.
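
If it helps anyone, the whole thing is only a few lines. A minimal sketch in TypeScript, assuming the @modelcontextprotocol/sdk, pg and zod packages (the tool API here is from memory, so check the current SDK docs); the real protection comes from the database role having SELECT-only grants:

    // Minimal read-only database MCP server (sketch, not production code).
    // Assumes DATABASE_URL points at a role with SELECT-only grants.
    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import { z } from "zod";
    import pg from "pg";

    const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });
    const server = new McpServer({ name: "readonly-db", version: "0.1.0" });

    server.tool(
      "query",
      "Run a read-only SQL query and return the rows as JSON",
      { sql: z.string() },
      async ({ sql }) => {
        const client = await pool.connect();
        try {
          // Belt and braces: even with a read-only role, run the statement
          // inside a READ ONLY transaction so any write fails loudly.
          await client.query("BEGIN TRANSACTION READ ONLY");
          const result = await client.query(sql);
          return {
            content: [{ type: "text", text: JSON.stringify(result.rows, null, 2) }],
          };
        } finally {
          await client.query("ROLLBACK").catch(() => {});
          client.release();
        }
      }
    );

    await server.connect(new StdioServerTransport());

Point Copilot or Cline at it over stdio and the model only ever sees what that login can see.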


Which is strictly worse than just giving the LLM access to the source of truth for the database.

You're adding a round trip between the LLM and the database, and inserting a tool call into the conversation before it even starts generating any code.

And the reference Postgres MCP implementation doesn't include Postgres types or materialized views, yet it's one of the most widely used packages: Zed.dev's MCP server, for example, is seemingly just a port of it and has the same problem.
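
(If you do end up assembling the schema context yourself, the missing bits are easy to pull from the catalogs. A rough sketch, again TypeScript with the pg package and made-up naming, covering enum types and materialized views only:)

    // Sketch: gather enum types and materialized views, which the reference
    // Postgres MCP server's schema listing leaves out, so they can be
    // appended to whatever schema text the model is given.
    import pg from "pg";

    const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });

    export async function extraSchemaContext(): Promise<string> {
      const enums = await pool.query(`
        SELECT t.typname AS enum_name,
               array_agg(e.enumlabel ORDER BY e.enumsortorder) AS labels
        FROM pg_type t
        JOIN pg_enum e ON e.enumtypid = t.oid
        GROUP BY t.typname
        ORDER BY t.typname
      `);
      const matviews = await pool.query(
        "SELECT schemaname, matviewname FROM pg_matviews ORDER BY 1, 2"
      );
      return [
        "-- enum types --",
        ...enums.rows.map((r) => `${r.enum_name}: ${r.labels.join(", ")}`),
        "-- materialized views --",
        ...matviews.rows.map((r) => `${r.schemaname}.${r.matviewname}`),
      ].join("\n");
    }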


MCP also gives the LLM access to your example data, which can add clarity beyond what your schema alone provides.


I don't see how a round trip of <500ms, which is equivalent to maybe 50 tokens, is worse than including many thousands of extra tokens in the prompt just in case they might be useful. Not to mention the context fatigue.

If designed well - by suspending generation in memory and inserting a <function_result>, without restarting generation and fetching cache from disk - the round trip/tool call is better (costs the equivalent of 50 tokens for waiting + function_result tokens).


You're dealing with the full TTFT (time to first token) twice over, plus the tokens for all the prompts of all your MCPs, before you even get to that round trip to the DB.

And you don't have to wonder about "if designed well": the reference implementation that's getting 20k downloads a week and getting embedded in downstream editors is not designed well; it will make the round trip every time and still not give the LLM the full information about the table.

Most MCP implementations are similarly crappy and half-assed, because everyone was rushing to post how they added <insert DB/API/Data Source> to MCP.

And if you're worried about "context fatigue" (you mean LLMs getting distracted by relevant information...), you should 100% prefer a well known schema format to N MCP prompt definitions with tool usage instructions that weren't even necessarily tuned for the LLM in question.

LLMs are much more easily derailed by the addition of extra tools, and by having to reason about when to call them and about the results of calling them, than they are by a prompt-caching-friendly block of tokens with easy-to-follow meaning.


The schema in the DB should be the source of truth, and an MCP server like that is the most flexible; it can work with any ORM setup.


What source of truth? If you have access to the database then you have the actual truth right there.


Out of interest: does the resultant data get used by the LLM, or is it just generating the SQL, with execution and results handled separately?


PM on the project here. The results from the query are generally not used by the LLM. In agent mode, though, during query planning the agent may retrieve a sample of the data to improve the precision of the queries, for example getting distinct values from a dimension table to resolve a filter condition in a natural language statement.


Thanks. I worry about these kinds of tools connecting to production databases, especially considering how easy it is to switch out LLM endpoints. Where that data is going, how it is retained, the context, etc. becomes a bit of a privacy nightmare.


Absolutely valid concern. Our extension connects to LLMs through GitHub Copilot. GitHub Copilot is a Microsoft product and offers a variety of enterprise plans, which enables your IT department to approve what can be used for what kind of data. This gives you a clear path towards compliance with your enterprise requirements.


Makes sense, and I appreciate the responses. Honestly though, as a person outside the US, I'm removing my dependence on US companies' IT tools and infrastructure (GitHub, VS Code, AWS, etc.), enterprise or otherwise. Congrats on the project, though.


I strongly recommend the apps you mentioned, but in the author's case they wanted to keep their music in iCloud.


There's a dedicated settings page for quickly toggling popular developer options such as showing file extensions and full paths. Getting rid of the rest just involves tweaking a few other settings, like turning off tips and the welcome screen. I also hide the weather and news widget because it's tabloid rubbish, but many people seem to love it.


I'm also using RR7, and Gemini 2.5 Pro just refused to believe me that I could import Link from react-router. It ignored my instructions and went down a rabbit hole in Copilot agent mode, deeper and deeper, trying every possible package name (none of which were installed). I've now created a Copilot instructions file into which I've copied most of the RR7 migration docs.
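
For the record, the import the model kept fighting is genuinely this simple in v7 (react-router-dom still exists, but only as a compatibility re-export):

    // React Router 7: components and hooks are exported from "react-router"
    // itself; "react-router-dom" survives only as a thin re-export.
    import { Link, NavLink, useNavigate } from "react-router";

(For VS Code, Copilot picks up repository-wide instructions from .github/copilot-instructions.md.)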


I've also used pagedjs for a relatively complex booklet with bidirectional text in different languages, images and long footnotes. The result was great, but there were some annoying bugs, some of which seemed to be underlying bugs in Chrome and Firefox. Still, LaTeX would have been even more frustrating.
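
For anyone who hasn't tried it, driving it from script is roughly this. A sketch assuming the "pagedjs" package, with the element ID and stylesheet path made up for illustration; the Previewer API is from memory, so check the docs:

    // Paginate #book's content with the print stylesheet and render the
    // resulting pages into the document body.
    import { Previewer } from "pagedjs";

    const previewer = new Previewer();
    previewer
      .preview(document.querySelector("#book"), ["/styles/print.css"], document.body)
      .then((flow) => {
        console.log(`Rendered ${flow.total} pages`);
      });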


Coincidentally, I've also used pagedjs for a project recently (125K novel) and encountered some bugs/minor issues. Overall though, I would say I had an immensely positive experience (because even when stuff broke, it was still just HTML, CSS, and JS--so I, like any other web developer, could fix it).

That said, it's a shame that the relevant W3C specs (see https://pagedjs.org/about/) still aren't fully supported by browsers (though perhaps such is the fate of niche features); that being the case, I'm infinitely thankful that pagedjs exists as a polyfill.


Oh, I certainly don't doubt that. And as I said, I haven't really found Paged.js all that frustrating! I have extensive though not recent PageMaker experience; I expected InDesign to be easier, and I dread the day I'm forced to resort to it.

In my experience Paged.js is at its best when building to PDF, but then that's always my intermediate format when working to paper, because that's where PDF's inflexibility shines. The source of a book block, everything that builds to that PDF, partakes of all the infelicities of the JS ecosystem. But to remake the book itself again, all I need do to start is print the PDF.


Very nice colours. However, I'm mildly red-green colour blind, like 5% of men (and very few women), and the two in the middle look practically the same to me. The leftmost and the third from left are almost indistinguishable too. I'm guessing this isn't the case for everybody?


How does the reduced palette work for you? https://uchu.style/simple.html


That’s purplish vs blueish, and greyish vs pinkish respectively. For me they’re different fyi.

