Hacker News | bdw5204's comments

Another major use case for it is enabling students to more easily cheat on their homework, which is probably why it is going to end up putting Chegg out of business.

I am shocked when I talk to college kids about AI these days.

I try to explain things to them like training-data regurgitation, context window limits, and confabulation.

They stick their fingers in their ears and say "LA LA LA LA it does my homework for me nothing else matters LA LA LA LA i can't hear you"

They really do not care about the Turing Test. Today's LLMs pass the "snowed my teaching assistant test" and nothing else matters.

Academic fraud really is the killer app for this technology. At least if you're a 19-year-old.


These kids are only in school to get a meal ticket to white-collar job interviews. AI frees them to be honest about their intentions, rather than pretend for long enough to stumble their way into learning something.

Maybe AI will finally skewer the myth that an undergraduate degree means anything.

AI brought “let someone else do it for you” cheating to the middle class, no longer the domain of wealthy flunkies.

The President can, in fact, recess appoint a Supreme Court justice per Article II, Section 2, Clause 3[0].

Since the George W. Bush administration, Congress has used pro forma sessions[1] to prevent recess appointments. Both the House and Senate would have to agree on a time to adjourn Congress per Article I, Section 5, Clause 4. If they disagree but one of them wants to adjourn, the President can adjourn them under Article II, Section 3. But no president has ever done this.

President Trump talked about doing it to ram through his appointments both in 2020 and last year during the transition period. But so far it hasn't been deemed necessary because the Senate has, surprisingly to me, confirmed his cabinet in a timely manner and without significant pushback even on the less conventionally conservative choices like the DNI and the HHS Secretary. In all likelihood, the threat of adjourning Congress and of using his billion-plus dollars of fundraising for 2026 to primary uncooperative Republican members of Congress has forced them to largely fall in line for now.

Recess appointments to the Supreme Court were common in the old days when the Court was less politically contentious. Justice William J. Brennan was recess appointed by Eisenhower and later confirmed by the Senate. A recess appointment who is not confirmed by the Senate would be null and void at the start of the next Congress on January 3rd of the next odd-numbered year. I doubt any president would recess appoint a Supreme Court justice today, both because it would likely derail their nomination and because a recess Justice might get to hear at most one term of cases depending on timing. Recess appointing somebody to run the FDA or the Justice Department, or even to be a district court judge, would be much more useful to a President's agenda.

[0]: "The President shall have Power to fill up all Vacancies that may happen during the Recess of the Senate, by granting Commissions which shall expire at the End of their next Session."

[1]: These are sessions where they immediately adjourn by unanimous consent after doing the formalities to open the session. C-SPAN broadcasts them live and they only last a few minutes at most.


It's also worth noting that reliable daily temperature records only go back to the late 1800s. The way you know this is that there are no record highs or lows in 1788 or even 1828. Most likely, at least some of the real records were set prior to the invention of temperature measurement.

On a related note, global population data before about 1800 or so is also unreliable because censuses hadn't been invented yet. During the Enlightenment, people actually debated if world population was increasing or decreasing. Many thought it had been constantly decreasing since the decline and fall of Rome. In general, reliable statistics for more or less anything are newer than the United States of America.


Even after that, records are not really up to current standards (and I am sure even current standards are not perfect; things will always go wrong). What would have been the highest temperature recorded in the UK in the 20th century was not counted in the records because there were issues with its reliability:

https://en.wikipedia.org/wiki/1911_United_Kingdom_heat_wave#...


>It's also worth noting that reliable daily temperature records only go back to the late 1800s. The way you know this is that there are no record highs or lows in 1788 or even 1828. Most likely, at least some of the real records were set prior to the invention of temperature measurement.

There's a whole meta-genre of academic papers that consists of taking unreliable measurement logs from prior centuries, comparing them to the current best scientific understanding of the field, and then saying "aha, X wasn't mistaken about observing/measuring/concluding Y in conditions/location Z; his instrumentation was likely out of tune by a factor of N, and if we re-do his math with the following error bars and plot the results, you can see that what he reported is within the limits of our understanding of the subject today".

I'm not gonna say it's useful or useless science, but it sure is interesting to find out how close to modern understanding some of those guys back then were within their niches if you account for the quality of their equipment despite sometimes very unscientific conditions.


> The way you know this is that there are no record highs or lows in 1788 or even 1828.

This is not specific to weather, although one interesting example would be California in November 1861 - January 1862. Most people think of the gold rush, but there was also a 20 year drought that ended with the largest flood in recorded history. 10 feet of precipitation in California, in the form of rain and snow, over a period of 43 days. It was followed by a huge bloom in vegetation, and the rancho cattle population quadrupled. Then another drought in 1864, that wiped out most of the cattle. And a smallpox epidemic that wiped out 90% of the remaining native population.

https://en.wikipedia.org/wiki/Great_Flood_of_1862

https://www.latimes.com/archives/la-xpm-1991-06-13-nc-780-st...


Uhm, censuses are described in the Bible (in fact one has a central enough role that even a committed heathen like me is aware of it), and they existed in many places on a similar timeline. I have no problem believing that they were imprecise, and not widespread enough to give good global numbers, but they had certainly been invented much earlier.


There's also the Domesday Book (https://en.wikipedia.org/wiki/Domesday_Book), which wasn't an exact census but it did log 268,984 people considered to be the head of a household; based on this and all the ifs and buts around the number, they estimated the population of Wales and England in 1086 between 1.2 and 1.6 million people.

But it was more a taxation thing.


> But it was more a taxation thing.

Herod's census was a tax thing too. Censuses are very expensive, and only even vaguely reliable if you threaten people with dire consequences for not taking part properly. So they don't often get done for funsies.


Given that we basically use the same technique for censuses today, the ancient ones probably weren’t especially less reliable.


Yes, but they were imprecise and inconsistent.

The Roman Empire had a motive to take a census (for things such as taxation of its subjects) and the means to enforce it over a wide area, neither of which survived its fall.


Hence:

> I have no problem believing that they were imprecise

I only took issue with the claim they were invented that late.


This is just another reason why dependencies are an anti-pattern. If you do nothing, your software shouldn't change.

I suspect that this style of development became popular in the first place because the LGPL has different copyright implications based on whether code is statically or dynamically linked. Corporations don't want to be forced to GPL their code so a system that outsources libraries to random web sites solves a legal problem for them.

But it creates many worse problems because it involves linking your code to code that you didn't write and don't control. This upstream code can be changed in a breaking way or even turned into malware at any time but using these dependencies means you are trusting that such things won't happen.
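One partial mitigation, short of vendoring everything, is to pin each dependency by content hash so that any upstream change fails loudly instead of flowing silently into your build. A minimal sketch in Python (the lockfile workflow implied here is hypothetical; the point is just the hash comparison):

```python
import hashlib


def verify_dependency(path, expected_sha256):
    """Compare a downloaded artifact against a hash recorded at review time.

    The expected hash would come from a lockfile committed when the
    library was first vetted. Any upstream change, benign or malicious,
    then fails this check loudly rather than landing unnoticed.
    """
    with open(path, "rb") as f:
        actual = hashlib.sha256(f.read()).hexdigest()
    return actual == expected_sha256
```

Package managers that support hash-checked installs apply the same idea automatically; the trade-off is that you take on the work of reviewing and re-pinning every update yourself.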

Modern dependency based software will never "just work" decades from now like all of that COBOL code from the 1960s that infamously still runs government and bank computer systems on the backend. Which is probably a major reason why they won't just rewrite the COBOL code.

You could say as a counterargument that operating systems often include breaking changes as well. That's true, but you don't update your operating system on a regular basis. And the most popular operating system (Windows) is probably the most popular because Microsoft has historically prioritized backward compatibility, even to the extreme of including special code in Windows 95 to make sure it didn't break popular games like SimCity that relied on OS bugs from Windows 3.1 and MS-DOS[0].

[0]: https://www.joelonsoftware.com/2000/05/24/strategy-letter-ii...


What are you advocating for? Zero external dependencies? Write a new YAML parser from scratch? Rolling your own crypto?


All the use of real names on social media accomplishes is a chilling effect on speech. Especially if your opinions differ from those of your employer or customers. Or if people who disagree with you are engaging in harassment campaigns or domestic terrorism against their political opponents.

This can apply to either side. Whether you're a Trump voter in San Francisco or an LGBTQIA+ person in a rural "Bible Belt" community. Doxxing is one of the most serious rules violations on the internet because exposing somebody's real world identity endangers the personal safety of the victim. A real names only policy effectively forces everybody to self-dox or be silenced.


yes, realnames depends on out-of-band harassment being illegal and suppressed.

but it also means responsibility, which acts in the opposite direction.

a realnames policy doesn't preclude also offering pseudonyms for sensitive cases.

to me, it seems like the real issue is sockpuppetry - basically a Sybil attack on authenticity/reputation/responsibility.


It's perfectly readable on Brave for Android. The text even wraps to the screen size so you don't have to scroll.

Which phone browser renders it in an unreadable manner?


Any phone browser that rendered the page before they added a simple fix


The best thing for Brave to do would just be to build it into their own ad blocker because Google is going to intentionally make it more and more impractical to support older extensions that interfere with their business model.


I don't think there was ever a time when you could succeed in the marketplace on the merits of your tech. Once the tech reaches the relatively low bar of "good enough", the rest is sales and marketing. In the most lucrative enterprise market, the "good enough" bar is even lower than in the much less lucrative consumer market because the people who will actually have to use your tech aren't the ones buying it. Technical quality likely matters the most to "customers" who don't pay anything such as users of popular open source projects.

If you want to make money from a good product then becoming a social media influencer who talks about your product is the most straightforward way to advertise without having to pay for ads.


> In the most lucrative enterprise market, the "good enough" bar is even lower than in the much less lucrative consumer market because the people who will actually have to use your tech aren't the ones buying it.

This reminds me of the time Citi lost $900 million due to terrible software [0].

[0] https://www.bloomberg.com/opinion/articles/2021-02-17/citi-c...


If success is financial then no. But you could build a reputation from some brilliant and easy to use/integrate tech. Which is in the hacker's spirit and the backbone of so much "successful" software.

It also set forth a very straightforward "success" path: make good software that the community praises and tech companies (who probably also use your code) will climb over each other to get you on their team. But I suppose those days are slowly ending as well now that many problems (for all but the biggest tech companies) are being solved. No need for a million dollar mastermind when a 100k mid-level can do the job.


I've got reasonable financial success in my software product mainly from the merits of the tech. Pretty much all the marketing I ever did was spamming a small mailing list once and making an anonymous website that subtly mentioned my product; it got good Google ranking and got referenced by other people. Eventually I made a Wikipedia page, which I'm not sure does any good. I got good ranking in Google early on without any particular effort, probably because there just aren't many competitors.

Other people have done a lot to help at their own initiative though. Resellers approach me and market it themselves, customers recommend it on forums, researchers mention it in their published papers, and one customer even wrote a chapter of a book about it - which was key to being eligible for a Wikipedia page.

I'm lucky though because it belongs to a slow moving, well-defined class of products that people in my target industry already understand so when they go looking for a cheaper alternative to the super-priced big names, they find mine. I'm not inventing a new market.

It's not free money though. It's very code-heavy and technical-understanding-heavy and I've spent nearly 20 years actively developing it by now. One man wouldn't be able to just smash one out in a year, and you'd need some reasonably deep domain knowledge.


What's the product if I may ask? Or at least the general category of products you operate in - if you don't mind mentioning it.


I'm a bit shy about the details but it's used by engineers. Actually there's a lot of opportunity in software for engineers. They pay huge prices and the quality of what they have is often poor. I'm aware of some gaps in the market. For example, modeling thermal distortion due to robot welding. That's not what my product does but that's one where the existing solutions are something like $50,000/year and it's a hard problem in part because the software has to run faster than an actual welding robot making a prototype to be economical to model it in the first place. It takes some clever research to invent the secret sauce to get those speeds.


That's an interesting example, thanks for sharing.


One reasonable compromise would be for video makers to provide a transcript or written article to complement their video. Video is a terrible format especially when you're actually using the video and not just using it as a mechanism to deliver audio. Audio is not a bad medium because you can do something else while listening to it.


I imagine most exit nodes are likely controlled by the US government and/or its close allies. Who else wants to have their IP address banned from most of the internet and potentially get visits from their country's equivalent of the FBI?

If most Tor users ran exit nodes and most people used Tor, it would effectively make internet traffic anonymous. But without those network effects, it is vulnerable by design to deanonymization attacks by state actors.


I run an exit node, and I know several people who do. I don't suspect any of them to be anything but people who care about privacy, surveillance, and helping people get access to the free internet from restrictive locations. I admit, I bristled at your comment, because I do not like myself, the EFF, and many of my close friends being imagined as part of the US Government.


I ran an exit node for a while, and found myself auto-banned from so many services that I stopped running the node and threw away my IP range (which now would be worth $$$ - oh well!)


I ran Tor nodes, had a bunch of blacklisted IPs, and just stopped running them and it was fine? Blacklisting Tor nodes requires updating the data often, so it falls off pretty quickly. To discard an entire /24 would be pretty funny over that!


Most people just use a DNSBL to block Tor exit nodes. They're pretty trivial to find online and presumably, very easy to set up because the list of Tor exit nodes is publicly available.

This also means the expiry time is usually tied to however long a Tor exit node stays on the DNSBL, plus three or so days (it depends on how the software is configured, but three days is the typical default for IPs that tend to get mixed up with automated spam, of which Tor is also a massive purveyor).
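Because the exit-node list is public, the core of such a blocker is just refreshed set membership. A rough Python sketch (the sample addresses below are made up; a real setup would re-download the published exit list on a timer and expire stale entries, since exit nodes churn quickly):

```python
import ipaddress


def is_tor_exit(ip, exit_list):
    """Check an address against a previously fetched Tor exit list.

    exit_list is a set of IP strings, e.g. parsed from the Tor
    Project's published bulk exit list (one address per line).
    ipaddress normalizes the input so equivalent spellings match.
    """
    return str(ipaddress.ip_address(ip)) in exit_list


# Made-up example addresses from the TEST-NET documentation ranges.
sample = {"203.0.113.7", "198.51.100.42"}
print(is_tor_exit("203.0.113.7", sample))  # True
print(is_tor_exit("192.0.2.1", sample))    # False
```

A DNSBL wraps this same lookup behind a DNS query so off-the-shelf server software can consume it without any custom code.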


It's recommended to put an exit node on its own dedicated IP address.

