The area where I see this making the most transformational change is in enabling average citizens to ask meaningful questions about the finances of their local government. In Cook County, Illinois, there are hundreds of local municipalities and elected authorities, all of which produce monthly financial statements. There is not enough citizen oversight and rarely any media attention except in the most egregious cases (e.g. the recent drama in Dolton, IL, where the mayor is stealing millions in plain view of the citizens).
The citizens ask LLMs (or more advanced future AIs) to assess whether government finances are being used efficiently and whether there is evidence of corruption.
The corrupt government officers then start using the AIs to try to cover up the evidence of their crimes in the financial statements, with the AI possibly putting the skills of high-end, expensive human accountants (or better) into the hands of local governments.
Corrupt government officers are one thing. But there is a ton of completely well-meaning bureaucracy in the U.S. (and everywhere!) that could benefit from a huge, huge step change in "ability to comprehend".
Bad actors will always exist but I think there's a LOT of genuine good to be done here!
If we put the right checks and balances (powered by AI) in place now, we can front-run the criminals and catch both the obvious and the non-obvious crimes. We can shine light in more places and push the corruption further out of the system.
> The corrupt government officers then start using the AIs
You're making it way too complicated. The government will simply make AI illegal and claim it's for safety or something. They'll then use a bunch of scary words to demonize it, and their pals in the mainstream media will push it on low-information voters. California already has a bill in the works to do exactly this.
> The corrupt government officers then start using the AIs to try to cover up the evidence of their crimes in the financial statements.
There's a difference between an AI being able to answer questions and it helping cover up evidence, unless you mean "using the AIs for advice on how to cover up evidence".
That's what I did with my town's financial report: I asked ChatGPT to find irregularities.
The response was very concerning, with multiple expenses that looked truly suspicious (like $2,000 for planting a tree).
I would have gone berserk at the town council meeting if I were an activist citizen.
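For anyone who wants to try the same thing programmatically rather than pasting into the chat window, here's a minimal sketch, assuming the OpenAI Python SDK and a plain-text export of the report; the file name, model choice, and prompt wording are my own illustrations, not a vetted audit tool:

    # Minimal sketch: hand a plain-text export of a municipal financial report
    # to an LLM and ask it to flag line items that look anomalous.
    # Assumes the OpenAI Python SDK ("pip install openai") and an
    # OPENAI_API_KEY in the environment; model, file name, and prompt
    # wording are illustrative.
    from openai import OpenAI

    client = OpenAI()

    # Plain-text export of the report (hypothetical file name).
    with open("town_financial_report.txt") as f:
        report_text = f.read()

    response = client.chat.completions.create(
        model="gpt-4o",  # any capable model; this choice is illustrative
        messages=[
            {
                "role": "system",
                "content": (
                    "You are reviewing a municipal financial report. "
                    "List expenses that look unusual for their category "
                    "(e.g. far above typical market cost) and explain why. "
                    "Flag uncertainty; do not assert wrongdoing."
                ),
            },
            {"role": "user", "content": report_text},
        ],
    )

    print(response.choices[0].message.content)

The output is a starting point for questions at a council meeting, not evidence; anything it flags still needs to be checked against the actual invoices.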
I think this is in general one of the big wins with LLMs: Simple summarization. I first encountered it personally with medical lab reports. And as I noted in a past comment, GPT actually diagnosed an issue that the doctors and nurses missed in real-time as it was happening.
The ability to summarize and ask questions of arbitrarily complex texts is so far the best use case for LLMs, and it's non-trivial. I'm ramping up a bunch of college intern devs, and they're all using LLMs; the ramp-up has been amazingly quick. The delta in ramp-up speed between this summer and last is an order of magnitude, and I think it is almost all LLM-based.
> citizens to ask meaningful questions about the finances of their local government.
Is there demand for this? I live in Cook County. I really don't want to ask these questions. I'm not sure what I get out of asking them other than anger and frustration.
> if all the citizens can ask these questions, I think it will make a difference.
Our mayor just appointed some pastor to a high-level position in the CTA (the local train system) as some sort of patronage.
That's the level things operate at in our govt here. I am skeptical that some sort of data enlightenment in the citizenry via LLMs is what is needed for change.
Then anything you plan is doomed from the start. If companies started slipping cyanide into their food, it would take at least 20 years for people to stop buying it. Getting everyone to simply do your thing while they're busy with their own lives is a fool's errand.
Most people won't care most of the time. But if the local government cuts the budget for something you like and says "We couldn't find the money," you may care that year.
Let's say LLMs work exactly as advertised in this case: you go into the LLM, say "find corruption in these financial reports", and it comes back with some info about the mayor spending millions on overpriced contracts with a company run by his brother. What then? You can post on Twitter, but unless you already have a following it's shouting into the void. You can go to your local newspapers, they'll probably ignore you; if they do pay attention, they'll write an article which gets a few hundred hits. If the mayor acknowledges it at all, they'll slam it as a political hit-piece, and that's the end of it. So your best chance is... hope really hard it goes viral, I guess?
This isn't meant to be overly negative, but exposing financial corruption is mostly about information control; I don't see how LLMs help much here. Even if/when you find slam-dunk evidence that corruption is occurring, it's generally very hard to provide evidence in a way that Joe Average can understand, and assuming you are a normal everyday citizen, it's extremely hard to get people to act.
As a prime example, this bit on the SF "alcohol rehab" program[0] went semi-viral earlier this week; there's no way to interpret $5 million/year spent on 55 clients as anything but "incompetence" at best and "grift and corruption" at worst. Yet there's no public outrage or people protesting on the streets of SF; it's already an afterthought in the minds of anyone who saw it. Is being able to query an LLM for this stuff going to make a difference?
Also, per the link, cheaper than emergency room visits and ambulance transports:
> But San Francisco public health officials found that the city saved $1.7 million over six months from the managed alcohol program in reduced calls to emergency services, including emergency room visits and other hospital stays. In the six months after clients entered the managed alcohol program, public health officials said visits to the city’s sobering center dropped 92%, emergency room visits dropped more than 70%, and EMS calls and hospital visits were both cut in half.
> Previously, the city reported that just five residents who struggled with alcohol use disorder had cost more than $4 million in ambulance transports over a five-year period, with as many as 2,000 ambulance transports over that time. [emphasis mine]
> The San Francisco Fire Department said in a statement that the managed alcohol program "has proven to be an incredibly impactful intervention" at reducing emergency service use for a "small but highly vulnerable population."
Beautifully stated. I can only speculate, but I'd say the reason it is this way is the collective apathy/cynicism toward government. We have collectively come to expect a certain level of corruption and influence peddling. We have a high tolerance for incompetence in carrying out government operations. Only the most egregious offenders are brought to the public's attention, and in an age of increasingly short attention spans, people have forgotten by the time elections roll around.
That is, if they vote in the first place. In the example I gave above of a corrupt mayor stealing millions (Tiffany Henyard of Dolton, IL), voter turnout was only 15%.
Why would you report financial crimes to Twitter? If your LLM uncovers financial crimes you should contact regulators and prosecutors. They're both incentivized to do something about it.
Oh yeah. This. I live in a tiny community, yet our district school board has a $54 million budget right now, and all the audits are rubber stamps and a wink and a nudge from the State. When residents try to dig in and complain about waste and fraud, we are shrugged off.
It’s your assumption that the lack of oversight is because of too much information. How will you validate that hypothesis before you invest in a solution?