queuebert's comments | Hacker News

In other words, comparatively little of their profits is returned to shareholders. Insiders are making money, but the rest of us are shut out?

It means that the infamous military-industrial complex does not exist in its fabled money-printing form.

Turns out the US government is very demanding, and stingy with its money, relative to what it ultimately gets.


And the expert on boilers is probably a 50-year-old dude who repairs them for a living and who you can only find by word of mouth, not a 25-year-old just out of college with flashy pitch decks and pristine Gucci loafers.

Ultimately, a consultant is whatever you hire them to be. Sometimes that means listening to what they have to say (boiler expert, lawyer, etc.); sometimes that means having them listen to what you have to say. The 25-year-old in Gucci loafers is happy to do the latter.

Yeah, if you hire a Big Four firm. My consulting team is all people with actual industry experience ...

Exactly -- they are professional scapegoats to insulate management from the consequences of their actions.

Not nearly as tasteful as Paul Allen's card.

I'm struggling to think of a federal job in which having ChatGPT would make the worker more productive. I can think of many ways to generate more bullshit and emails, however. Can someone help me out?

The government has a lot of text to process, and LLMs are good at processing text, and they can be implemented pretty safely in these roles.

An obvious example might be: someone who is trying to accomplish a task but needs to verify the legal authorization/justification/guidelines, etc., for that task. If they don't have the specific regulation memorized (e.g. the one person who had been doing this esoteric task for 20 years just got laid off by DOGE), they may have to spend a lot of time searching legal texts. LLMs do a great job of searching texts in intuitive ways that traditional text search can't match.
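
Roughly the kind of thing I mean, as a sketch (assumes sentence-transformers; the model name, corpus layout, and query are all invented for illustration):

    # Semantic search over regulation text: finds passages by meaning,
    # not keyword overlap. A sketch; file layout and model are assumptions.
    from pathlib import Path
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    # One chunk per file here; real regs would need paragraph-level chunking.
    corpus = [p.read_text() for p in sorted(Path("regs").glob("*.txt"))]
    corpus_emb = model.encode(corpus, convert_to_tensor=True)

    query = "Who can authorize travel reimbursements over $500?"
    query_emb = model.encode(query, convert_to_tensor=True)

    # Cosine similarity surfaces the regulation about the *concept*, even if
    # it never uses the words "travel" or "reimbursement".
    for hit in util.semantic_search(query_emb, corpus_emb, top_k=3)[0]:
        print(f"{hit['score']:.2f}  {corpus[hit['corpus_id']][:80]}")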


But does the work of verifying that LLM output outweigh the work of just doing the search the old-fashioned way? Probably, but we'll skip verification, just like always. This is the scariest feature of LLMs: failure is built into the design of the system, but people just laugh, call the failures "hallucinations", and move on.

The efficiency gains from AI come entirely from trusting a system that can't be trusted.


Not all implementations of LLMs are "just type questions into chatgpt.com or ollama and trust the raw result", even though that is probably what people are most familiar with right now.

They can be used pretty safely when incorporated into other systems that put guardrails around them. Not a dumb wrapper, but systems that use LLMs as processing tools.

For example, one extremely safe use case is using LLMs as a search tool. Ask the model to cite its sources, then string-match those citations back against the source texts. You are guaranteed that the sources actually exist, because you validated them.
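
A sketch of that validation step (the <<...>> quoting convention is something you'd have to prompt for; it's an assumption here, not a standard):

    # Check that verbatim quotes in an LLM answer actually appear in the
    # source text. Quotes are assumed to be wrapped in <<...>> by prompting.
    import re

    def extract_quotes(answer: str) -> list[str]:
        return re.findall(r"<<(.+?)>>", answer, flags=re.DOTALL)

    def validate(answer: str, source_text: str) -> dict[str, bool]:
        # Exact string match: any quote not found verbatim is flagged
        # as a likely hallucination.
        return {q: q in source_text for q in extract_quotes(answer)}

    source = "Section 4.2: prescribed burns require notice to the district office."
    answer = "Allowed, per <<Section 4.2: prescribed burns require notice>>."
    print(validate(answer, source))  # the quote is found verbatim -> True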


In cybersecurity, which in some departments is a lot of paper pushing built around RMF (the NIST Risk Management Framework), ChatGPT would be a welcome addition. Most people working with RMF don't know what they're talking about, don't have the engineering background to validate their own risk-assessment claims against reality, and I would trust ChatGPT over them.

Companies right now that sell access to periodicals, information databases, etc. are tacking on AI services (RAGs, I suppose) as a competitive feature (or another way to raise prices). To the degree that this kind of AI-enhanced database would also benefit the public sector, of course government would be interested.

Summarize long text when you don't have the time to read the full version. Explain a difficult subject. Help organize thoughts.

And my favorite: when you're having a really bad day and can hardly focus on anything on your own, you can use an LLM to at least make some progress, even if you have to re-check it the next day.


So, if a legislator is going to vote on a long omnibus bill, is it better that they don't read it, or that they get an inaccurate summary of it, maybe with hallucinations, from an LLM?

Or maybe they should do their job and read it?


The simple answer to your questions is, "Yes".

But the government is a lot larger than legislators: FAA, FDA, FCIC, etc. It's just like any (huge) private business.


Is your thought that the Federal government is only legislators?

The invention of the word processor has been disastrous for the volume of extant regulations. Even long-tenured civil servants won't have it all memorized, or have the time to read the thousands of pages of everything that could plausibly relate to a given portfolio.


There are 2.2 million federal workers. If you can't think of anywhere that tools like this could improve productivity, it speaks more to your lack of imagination or lack of understanding of what federal workers do than anything intrinsic to the technology.

If it were so easy, why didn't you post a few examples rather than insult me?

US Forest Service: 'hi chatgpt, here are three excel files showing the last three years of tree plantings we've done by plot and by species. Here's a fourth file in PDF format of our plot map. Please match the data and give me a list of areas that are underplanted relative to the rest, so we can plan better for this year'

I use it for stuff like this all the time in a non-government job. It's 100% doable without AI, but takes an order of magnitude more time. No hyperbole. People here talking about security risks are smart to think things through, but they overestimate the sensitivity of most government work. I don't want the CIA using ChatGPT to analyze and format lists of all our spies in China, but for the other 2.19m federal workers it's probably less of a huge deal.
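
For a sense of the manual version, it's roughly this (file and column names invented; matching against the PDF plot map, the genuinely tedious part, is omitted):

    # The slow, non-AI version: total plantings per plot across three years,
    # then flag plots planted well below the median. All names are invented.
    import pandas as pd  # reading .xlsx also requires openpyxl

    plantings = pd.concat(
        pd.read_excel(f"plantings_{year}.xlsx") for year in (2022, 2023, 2024)
    )

    # Total trees planted per plot over all three years.
    per_plot = plantings.groupby("plot_id")["trees_planted"].sum()

    # "Underplanted relative to the rest": under half the median rate here.
    underplanted = per_plot[per_plot < 0.5 * per_plot.median()]
    print(underplanted.sort_values())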


And do you think ChatGPT is always doing this accurately? There is no end-to-end verification logic, so what you get could be hallucinated bullshit or could be correct. This is not the correct use of the tool right now. Maybe in the future, with a different architecture.

Accurately compared to what?

In my experience, using it this way is not less accurate than a human trudging through it, and I have no end-to-end logic to verify that the human didn't make a mistake that they didn't realize they made either. So that's as good as it needs to be.


ChatGPT is just generally useful for day-to-day stuff, without having to apply it to specific domains like programming.

Quick fact checks, quick complicated searches, quick calculations and comparisons. Quick research on an obscure thing.


I'm sorry, but I feel like I have to amend your scenarios to reflect the accuracy of LLMs:

> Quick [inconsequential] fact checks, quick [inconsequential] complicated searches, quick [inconsequential] calculations and comparisons. Quick [inconsequential] research on an obscure thing.

The reason that amendment is vital is that LLMs are, in fact, not factual. As such, you cannot base consequential decisions on their potentially mistaken output.


These are simply implementation failures. You should be using them to gather information and references that are verifiable. There are even hallucination detectors that do some of this for you automatically.

If you are treating LLMs like all-knowing crystal balls, you are using them wrong.


> I can think of many ways to generate more bullshit and emails

Like Elon's weekly five-bullet summary of what you did this past week :)


I'm struggling to think of a federal job in which anything, AI or otherwise, would make them more productive.

Yeah.

I work for a large telecom, and most techs complete two jobs per day.

Before computerization when everything was paper based: 2 jobs a day

With computers and remote access to test heads: 2 jobs a day

With automated end-to-end testing and dispatch: 2 jobs a day

Unless there is a financial incentive to be more productive that outweighs any negatives of being so (e.g. peer pressure), nothing will change.


As an aside, I think you are referring to black-body sources, which are described by Planck's law. Stars dgaf about the color spaces of computer monitors. They are relatively close to black bodies as thermal emitters, though, at least on the main sequence and modulo some spectral lines.

We physicists never use the term Planckian for thermal black bodies. That adjective would be used in quantum mechanics, though, for very small things.
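
For reference, Planck's law gives the spectral radiance of a black body at temperature T (h is Planck's constant, c the speed of light, k_B the Boltzmann constant):

    B_\lambda(\lambda, T) = \frac{2 h c^2}{\lambda^5} \cdot \frac{1}{e^{h c / (\lambda k_B T)} - 1}

Stars deviate from this mainly through those absorption and emission lines.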


Yes, I am. Planckian radiation is the term of art in color science, prescribed by its standards body, the CIE. https://files.cie.co.at/CIE_TN_013_2022.pdf

To understand what I mean by "closest Planckian light source" see https://en.wikipedia.org/wiki/Planckian_locus
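
If you want to play with this, the usual quick way to map a measured chromaticity to a temperature on that locus is a correlated color temperature (CCT) approximation; McCamy's formula is the common one. A sketch, only meaningful for chromaticities near the locus (roughly 2000-12500 K):

    # McCamy's approximation: CCT in kelvin from CIE 1931 (x, y) chromaticity.
    def mccamy_cct(x: float, y: float) -> float:
        n = (x - 0.3320) / (0.1858 - y)
        return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

    # The D65 white point sits near 6500 K on the daylight side of the locus.
    print(round(mccamy_cct(0.3127, 0.3290)))  # ~6505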


One big reason for filters in astronomy and astrophotography is to block certain frequency ranges, such as those emitted by city lights.


That would trade time and frequency information for spatial information, which is what you want in astronomy, but maybe not for candid family photos.


And RIP my perfectly good and working computer without a TPM chip. Guess I'll switch it to Linux ...


Are you sure your computer really doesn't have a TPM? Intel CPUs since Haswell and AMD CPUs since Zen 1 have a firmware-level TPM built in (implemented on the Intel ME / AMD PSP side). It's often disabled by default, but you can usually turn it on in the BIOS/UEFI setup interface (if the BIOS supports it), and Windows 11 will work with it. Sometimes even discrete TPMs on motherboards come disabled by default.

If you haven't already, check your BIOS for TPM/fTPM settings (or, if you're on Intel, also look for "Intel Platform Trust Technology" or "Intel PTT").
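
Or check from a running Linux system first. A sketch: /sys/class/tpm is the standard kernel sysfs location, but the version file only exists on recent kernels:

    # Quick check for an enabled TPM (discrete or firmware) on Linux.
    # An enabled TPM shows up as a device node under /sys/class/tpm.
    from pathlib import Path

    tpm_class = Path("/sys/class/tpm")
    devices = sorted(tpm_class.glob("tpm*")) if tpm_class.exists() else []

    if not devices:
        print("No TPM visible to the OS (it may exist but be disabled in firmware).")
    for dev in devices:
        ver = dev / "tpm_version_major"  # present on recent kernels only
        version = ver.read_text().strip() if ver.exists() else "unknown"
        print(f"{dev.name}: TPM major version {version}")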


Go ahead. You're not their customer; they couldn't care less. Enterprise is their customer.


Over my embarrassingly long time coding, I've gone through all of these fonts and more (VT100, anyone?) and eventually traded the sans-serif fixed-width fonts for ones with serifs, as they feel less tiring at the end of long days. For the last few years, I've used Monaspace [1] variants, especially Xenon, and enjoyed them immensely.

1. https://monaspace.githubnext.com/

