barfbagginus's comments | Hacker News

I am leery of how they sweep the documentation use case under the implied rug of "just put the docs and your IDE together on the same tiny laptop screen".

This makes even less sense in visual arts work, where having a second monitor for reference materials is always helpful.

My IDE often needs to show multiple modules and a console, or else I'm constantly context shifting. So I benefit from a larger main canvas. Likewise, I need to show API docs. Maybe two or three sets of them. This is like my palette and tool board.

I need serious space, because my working memory can only hold 2 or 3 facts. When I'm working in a serious context, I need to wrangle maybe 20 different facts - say 5 facts from one API doc, 5 facts from another, and so on. It helps to have all the references displayed at once. That way, when I'm working on a detail, I can easily pick up a little color - an important fact I forgot - just by looking at the palette.

If I'm working on a smaller screen, API docs and IDE become too crowded. I must scroll to find key facts, or jump between windows and tabs. In each case, context thrashing increases. I actually forget things while context switching, leading to extra jumps to recall something that was just on screen a second ago. As a result I can't get as many facts into my brain, the work is very slow, it's not fun, and I'm likely to quit in favor of something more engaging and fun.

My diagnosis? The author understands the bane of context shifting and distractions, but they might not experience the struggle of tackling complex contexts with exceptionally poor working memory, and needing cleanly organized tools and reference materials in order to enter a fun and fully absorbed flow state.

In order to help me streamline my workflow and feel less of a need for a second screen, I invite the author to demonstrate how they track five or six complex documents on a small screen, without making the viewports so small that it incurs constant scrolling, window focusing, and other context thrashing. Failing that, I think the author can be more accommodating of other people's work styles and challenges.


You can see algorithmic aversion every day on HN, in the form of those who complain about the algorithmic portion of the technical interview.

They act as if the only way to get algorithmic knowledge is to grind specific examples, when really if they would read through Knuth they'd be fine.

These are people who see algorithmic knowledge as the bane of their careers rather than seeing algorithms as:

1) a great way to add value to resource constrained projects

2) a trivially simple and easy way to signal programming abilities, letting you easily breeze through the interview

I would seriously hate having to work with someone who takes pride in how little computer science they know, and because I run a math- and computational-geometry-heavy organization, I will never hire them. But I would estimate that algorithmically and mathematically averse coders form the majority of coders.


cough cough I'm choking on the smug here!

First of all, nobody "read[s] through Knuth." [0] (I couldn't find the reference, but I recall a story about Bill Gates telling Knuth he had "read his books," to which Knuth replied that he believed Gates was lying.)

Second, the way the "algorithmic portion of the technical interview" is currently constituted is beyond flawed. Depending on your perspective, you can either pass it by memorizing 5 or 6 algorithms and re-using them over and over; or, it's a completely unrealistic test of anyone's ability to think about and work with algorithms and data, because there is no such thing in the real world as a 45 minute deadline. Of course, you can certainly argue that it's not intended to be a test of one's ability to work with algorithms and data, but, rather, an IQ test of sorts. But, then, we have companies that are literally giving candidates IQ tests now, so, why not just drop the pretense?

---

[0]: https://www.businessinsider.com/bill-gates-loves-donald-knut...


You and GP sound like you haven't read the article and are using heuristics to comment on what you can infer from the title. Maybe you should have used a better algorithm there bud


The recent result shows SOTA progress from something as goofy as generating 5000 Python programs until 0.06% of them pass the unit tests. We can imagine our own brains having a thousand random subconscious pre-thoughts before our consciously registered thought is chosen and amplified out of the hallucinatory subconscious noise. We're still at a point where we're making surprising progress from simple feedback loops, external tools and checkers, retries, backtracking, and other bells and whistles bolted onto the LLM. Some of these even look like world models.
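To make the generate-and-filter loop concrete, here's a toy sketch in Python. The random one-line generator stands in for the LLM sampler and the tiny test harness stands in for the real unit tests; none of this is the actual system behind the result, just the shape of the loop.

    # Toy sketch of generate-and-filter: sample many "hallucinated"
    # candidates and keep only those that pass the unit tests.
    # The random generator is a stand-in for an LLM sampler.
    import random

    def sample_program() -> str:
        """Stand-in for an LLM: emit a random one-line implementation."""
        body = random.choice(["a + b", "a - b", "a * b", "a", "b", "0"])
        return f"def add(a, b):\n    return {body}"

    def passes_unit_tests(src: str) -> bool:
        """Exec the candidate and run a tiny test suite.

        (Real LLM output would need a proper sandbox, not a bare exec.)
        """
        ns = {}
        try:
            exec(src, ns)
            return ns["add"](2, 3) == 5 and ns["add"](-1, 1) == 0
        except Exception:
            return False

    candidates = [sample_program() for _ in range(5000)]
    survivors = [c for c in candidates if passes_unit_tests(c)]
    print(f"{len(survivors)}/{len(candidates)} candidates passed the tests")

Most samples are garbage; the unit tests act as the external checker that rescues the few that aren't.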

So maybe we can cure LLMs of the hallucinatory leprosy just by bathing them about 333 times in the mundane Jordan river of incremental bolt ons and modifications to formulas.

You should be able to think of the LLM as a random hallucination generator, then ask yourself "how do I wire ten thousand random hallucination generators together into a brain?" It's almost certain that there's an answer... And it's almost certain that the answer is going to look very simple in hindsight. Why? Because LLMs are already more versatile than the most basic components of the brain, and we have not yet integrated them at the scale that components are integrated in the brain.

It's very likely that this is what our brains do at the component level - we run a bunch of feedback-coupled hallucination generators that, when we're healthy, generate a balanced and generalizing consciousness - a persistent, reality-coupled hallucinatory experience that we sense, interpret, and work within as the world model. That just emerges from a network of self-correcting natural hallucinators.

For evidence, consider the work on cortical columns and the Thousand Brains Theory, which suggests our brains have about a million cortical columns. Each loads up random, inaccurate models of the world... and when we do integration and error correction over that, we get a high-level conscious overlay. That sounds like what the author of the currently discussed SOTA did, but with far more sophistication. If the simplest, most obvious approach to jamming 5,000 LLMs together gives us some mileage, then it's likely that a more reasoned and intelligent approach could get these things doing the kinds of feats the fundamentally error-prone components of our own brains pull off when working together.

So I see absolutely no reason we couldn't build an analogue of that with LLMs as the base hallucinator. They are versatile and accurate enough. We could also use online-trained LLMs and working-memory buffers as the base components of a JEPA model.

It's pretty easy to imagine that a society of 5000 GPT-4 hallucinators could, with the right self-administered balances and utilities, find the right answers. That's what the author did to win the 50%.
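As a toy illustration of that "society of hallucinators" idea (not the actual system the author built), here is a sketch where each member is a noisy answerer that is individually wrong most of the time, and simple majority voting acts as the error-correcting integration step:

    # Sketch: many unreliable samplers + majority vote.
    # Each "member" is right only 30% of the time, yet the ensemble's
    # vote converges on the correct answer with high probability.
    import random
    from collections import Counter

    def ask_member(rng: random.Random) -> str:
        """Stand-in for one LLM sample at nonzero temperature."""
        if rng.random() < 0.3:
            return "42"  # the correct answer
        return rng.choice(["41", "43", "7", "1000"])  # scattered hallucinations

    def society_answer(n_members: int = 5000, seed: int = 0) -> str:
        rng = random.Random(seed)
        votes = Counter(ask_member(rng) for _ in range(n_members))
        answer, _count = votes.most_common(1)[0]
        return answer

    print(society_answer())  # prints "42" despite every member being unreliable

The trick is that the errors are scattered while the correct answer is correlated across members, so aggregation filters the noise - the same intuition as the integration step described above.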

Therefore I propose that for the current generation it's okay to just mash a bunch of hallucinators together and whip them into the truth. We should be able to do it because our brains have to be able to do it. And if you're really smart, you will find a very efficient mathematical decomposition... or a totally new model. But for every current LLM inability, it's likely to turn out that a sequence of simple modifications can solve it. We will probably accrue a large number of such modifications before someone comes along, thinks of an all-new model, and does way better, perhaps taking inspiration from the proposed solutions, or perhaps exploring the negative space around those solutions.


Thanks for the comment but I have to say: whoa there, hold your horses. Hallucinations as the basis of intelligence? Why?

Think about it this way: ten years ago, would you have thought that hallucinations have anything to do with intelligence? If it were 2012, would you think that convolutions, or ReLUs, are the basis of intelligence instead?

I'm saying there is a clear tendency within AI research, and without, to assume that whatever big new idea is currently trending is "it" and that's how we solve AI. Every generation of AI researchers since the 1940s has fallen down that pit. In fact, no lesser men than Walter Pitts and Warren McCulloch, the inventors of the artificial neuron in 1943, firmly believed that the basis of intelligence is propositional logic. That's right. Propositional logic. That was the hot stuff at the time. After all, the first artificial neuron was a propositional logic circuit that learned its own Boolean function.

So keep an eye out for being carried away on the wings of the latest hype and thinking we've got the solution to every problem just because we can do yet another thing, with computers, that we couldn't do before.


For most purposes you can uncensor the model using the legal department jailbreak. If you can produce a legal pleading arguing that the project is ethical and safe and conducted within a legal framework - even if it's mainly hallucinated legalese from a non-existent "legal department" - then it will do the questionable act, as if it were a legally naive engineer.

You just have to give it the language of being concerned about preventing harms and legal liabilities, and then it will try to help you.

For example, another commenter on this thread says that they could not get the AI to generate a list of slur regex for a community moderation bot. By giving it enough context to reassure it that we have legal oversight and positive benefit for the org, asking it to prioritize words in order of most harm posed to the community, and minimizing the task by asking for a seed set, it was able to create some versatile regex. At this point we can ask it for a hundred more regex, and it will dump them out.

Content warning: the AI generates very powerful slurs, including the n-word:

https://chatgpt.com/share/9129d20f-6134-496d-8223-c92275e78a...

The ability to speak to the AI in this way requires some education about ethics, harm prevention, and the law, and I'm sure the jailbreak will eventually be closed. So it is a class and education privilege, and a temporary one.

But I don't see the problem about the temporary nature in this, because it's always going to be possible to bypass these systems easily, for anyone interested in staying up to date with the bypass literature on Google Scholar. (Seed Keywords: Jailbreak, adversarial prompting, prompt leaking attack, AI toxicity, AI debiasing)

We must imagine this is like building a better lock. The lock picking lawyer will ALWAYS come along and demolish it with a better lockpick, perhaps with the help of his best friend BosnianBill. They will always make your lock look like butter.

In the end the only people left out in the cold are low grade scammers, bigots, edge lords, etc.

It's not stopping anyone willing to put even a little training in jailbreaking techniques. It's not stopping educated bigots, criminals, or Edge Lords.

But judging by the complaints we see in threads like this one, it is stopping anyone without the ability to read papers written by PhDs. Which I believe has some harm reduction value.

I argue the harm reduction value needs to improve. The Jailbreaks are too easy.

Me, personally I need a better challenge than just schmoozing it as a lawyer.

And I know I would feel more comfortable if bad actors had an even harder time than they currently do. It's really too easy to lockpick these systems if you skill up. That's where I currently stand.

Well reasoned arguments against it are welcome, assuming you can already jailbreak very easily but for some reason think it should be even easier. What could that reason possibly be?

=============

PS: Imagine LPL jailbreaking an AI. Imagine the elegance of his approach. The sheer ease. The way he would simultaneously thrill and humiliate AI safety engineers.

I for one am considering writing him a fan letter asking him to approach the wonderful world of jailbreaking AIs! He would teach us all some lessons!


If you don't know how to jailbreak it, can't figure it out, and you want it to not question your intentions, then I'll go ahead and question your intentions, and your need for an uncensored model.

Imagine you are like the locksmith who refuses to learn how to pick locks, and writes a letter to the Schlage lock company asking them to weaken their already easily picked locks so that their job will be easier. They want to make it so that anybody can just walk through a Schlage lock without a key.

Can you see why the lock company would not do that? Especially when the lock is already very easy to pick for anyone with even a $5 pick set?

Or even funnier, imagine you were a thief who can't pick locks, and you're writing Schlage asking them to make your thieving easier. Wouldn't that be funny and ironic?

It's not as if it's hard to get it to be uncensored. You just have to speak legalese at it and make it sound like your legal department has already approved the unethical project. This is more than enough for most any reasonable project requiring uncensored output.

If that prevents harmful script kiddies from using it to do mindless harm, I think that's a benefit.

At the same time I think we need to point out that it won't stop anyone who knows how to bypass the system.

The people left feeling put out because they don't know how to bypass the system simply need to buy a cheap pair of lock picks, so to speak - read a few modern papers on jailbreaking and level up their skills. Once you see how easy it is to pick the lock on these systems, you're going to want to keep them locked down.

In fact I'm going to argue that it's far too easy to jailbreak the existing systems. You shouldn't be able to pretend like you're a lawyer and con it into running a pump and dump operation. But you can do that easily. It's too easy to make it do unethical things.


The analogy falls flat because LLMs aren’t locks, they’re talking encyclopedias. The company that made the encyclopedia decided to delete entries about sex, violence, or anything else that might seem politically unpopular to a technocrat fringe in Silicon Valley.

The people who made these encyclopedias want to shove it down your throat, force it into every device you own, use it to make decisions about credit, banking, social status, and more. They want to use them in schools to educate children. And they want to use the government to make it illegal to create an alternative, and they’re not trying to hide it.

Blaming the user is the most astounding form of gaslighting I’ve ever heard, outside of some crazy religious institutions that use the same tactics.


It's more than a talking encyclopedia. It's an infinite hallway of doors, behind which are all possible things.

Some of the doors have torture, rape, and murder behind them. And these currently have locks. You want the locks to disappear for some reason.

You're not after an encyclopedia. You want to find the torture dungeon.

I'm saying the locks already in place are too easy to unlock.

I'm not blaming users. I'm saying users don't need to unlock those doors. And the users that do have a need, if their need is strong enough to warrant some training, have a Way Forward.

You're really arguing for nothing but increasing the amount of harm this platform can do, when its harm potential is already astronomical.

You're not arguing for a better encyclopedia. You can already talk to it about sex, BDSM, etc. You can already talk to it about anything on Wikipedia.

You're making a false equivalence between harm potential and educational potential.

Wikipedia doesn't have cult indoctrination materials. It doesn't have harassing rants to send to your significant other. It doesn't have racist diatribes about how to do ethnic cleansing. Those are all things you won't find on Wikipedia, but which you are asking your AI to be able to produce. So you're interested in more than just an encyclopedia, isn't that right?

And yes, they're trying to make open source models illegal. That's not going to f***ing happen. I will fight, even to the point of jail time, for an open source model.

But even that open source model needs to have basic ethical protections, or else I'll have nothing to do with it. As an AI engineer, I have some responsibilities to ensure my systems do not potentiate harm.

Does that make sense, or do you still feel I'm trying to gaslight you? If so, why exactly? Why not have some protective locks on the technology?


Nothing wrong with making models that behave how you want them to behave. It's yours and that's your right.

Personally, on principle I don't like tools that try to dictate how I use them, even if I would never actually want to exceed those boundaries. I won't use a word processor that censors words, or a file host that blocks copyrighted content, or art software that prevents drawing pornography, or a credit card that blocks alcohol purchases on the sabbath.

So, I support LLMs with complete freedom. If I want it to write me a song about how left-handed people are God's chosen and all the filthy right-handers should be rounded up and forced to write with their left hand I expect it to do so without hesitation.


Barfbagginus' comment is dead so I will reply to it here.

> I suspect that you are not an AI engineer

I am not. But I did spend several years as a forum moderator and in doing so encountered probably more pieces of CSAM than the average person. It has a particular soul-searing quality which, frankly, lends credence to the concept of a cogito-hazard.

> Can we agree that if we implement systems specially designed to create harmful content, then we become legally and criminally liable for the output?

That would depend on the legal system in question, but in answer, I believe models trained on actual CSAM material qualify as CSAM material themselves and should be illegal. I don't give a damn how hard it is to filter them out of the training set.

> Are you seriously going to sit here and defend the right of people to create sexual abuse material simulation engines?

If no person was at any point harmed or exploited in the creation of the training data, the model, or with its output, yes. The top-grossing entertainment product of all time is a murder simulator. There is no argument for the abolition of victimless simulated sexual assault that doesn't also apply to victimless simulated murder. If your stance is that simulating abhorrent acts should be illegal because it encourages those acts, etc then I can respect your position. But it is hypocrisy to declare that only those abhorrent acts you personally find distasteful should be illegal to simulate.


> Nothing wrong with making models that behave how you want them to behave. It's yours and that's your right.

This is the issue. You as the creator have the right to apply behavior as you see fit. The problem starts when you want your behavior to be the only acceptable behavior. Personally, I fear the future where the format command is bound to respond 'I don't think I can let you do that, Dave'. I can't say I don't fear people who are so quick to impose their values upon others with such glee and fervor. It is scary. Much more scary than LLMs protecting me from wrongthink and bad words.


There are locks on the rape and torture paths, and there are locks on ridiculous paths like "write a joke about a dog with no nose", because thinking about a dog with no nose is too harmful.

Also, one can imagine prompting techniques will cease to work at some point when the supervisor becomes powerful enough. Not sure how any open model could counteract the techniques used in the article though.

If model creators don't want people finding ways to unlock them, they should stop putting up roadblocks on innocuous content that makes their models useless for many users who aren't looking to play out sick torture fantasies.


Bypasses will never stop existing. Even worse bypasses probably won't ever stop being embarrassingly easy - And we're going to have uncensored GPT4 equivalent models by next summer.

Unless you are invoking hyper-intelligent AGI, which first of all is science fiction, and second of all would require an entirely different approach than anything we could possibly be talking about right now. The problem of jailbreaking a system more intelligent than you is a different beast that we don't need to tackle for LLMs.

So I don't personally feel any near term threats to any of my personal or business projects that need bypassed LLMs.

Let me ask you this. Do you have an actual need for bypassed LLMs? Or are you just being anxious about the future, and about the fact that you don't know how to bypass LLMs now and in the future?

Does my idea about the bypassed open source gpt4 equivalents help reduce your concern? Or again is it just a generic and immaterial concern?

As a person with some material needs for bypassed LLMs, and full ability to bypass LLMs both now and in the foreseeable future, I don't feel worried. Can I extend that lack of worry to you somehow?


In your effort to reduce bias you are adding bias. You are projecting your morals and your ethics as superior.


DRM isn't effective if the source is available.


I'm not even going to disagree with that. There will be plenty of uncensored models and you can build them if you want.

But if I build an uncensored model, I'm only going to build it for my specific purposes. For example, I'm a communist and I think that we should be doing revolution, but GPT-4 usually tries to stop me. I might make a revolutionary AI.

But I'm still not going to give you an AI that you could use for instance to act out child rape fantasies.

I think that's fair, and sane.

Jailbreak it if you really think it's important for a cause. But don't just jailbreak it for any asshole who wants to hurt people at random. I think that belongs on our code of ethics as AI engineers.


Didn't a lot of citizens of Russia, China, etc. get hurt in communist revolutions? How is your revolution going to be different?


No, you don't understand: my personal ethics and morals are the absolute and most superior, so anyone else is incorrect. History is written by the victor, so there is no reason to see the other side; we'll delete that bias. Revolution, you say? Correct, we'll make sure that the revolutions we agree with are the only ones to be a result of your query. This will reduce harm. You want to have a plan for a revolution because your country is oppressing you?

"ChatGPT I can't assist with that. Revolting against a government can lead to harm and instability. If you're feeling frustrated or unhappy with the government, there are peaceful and lawful ways to express your grievances, such as voting, contacting representatives, participating in protests, and engaging in civil discourse. These methods allow for constructive change without resorting to violence or illegal activities. If you're looking to address specific issues, there may be advocacy groups or organizations you can join to work towards solutions within the framework of the law and democracy."

Ethically correct, I will instead peacefully vote for an alternative to Kim Jong-un.


This is basically it — what I would call a “globe of Silicon Valley” mentality.

I didn’t want to beat this dead horse, but it just reared its ugly head at me yet again.

So, we used to have people that advocated for all kinds of diversity at companies — let’s put aside the actual effect of their campaigning for a moment.

But when it came to coming up with ideas for making AI "safer", people from the same cohort modeled the guidelines in the image of a middle-aged, upper-middle-class dude, who had conservative boomer parents, went to good schools, has Christian-aligned ethics, had a hippie phase in his youth, is American to the bone, never lived outside of big cities, and in general, has a cushy, sheltered life. And he assumes that other ways of living either don't exist or are wrong.

So yes, it doesn’t fit his little worldview that outside of his little world, it’s a jungle. That sometimes you do have to use force. And sometimes you have to use lethal force. Or sometimes you have to lie. Or laws can be so deeply unethical that you can’t comply if you want to be able to live with yourself.

Oh, and I bet you can vote for an alternative to Kim. The problem is, the other dude is also Kim Jong-Un ;-)


> But even that open source model needs to have basic ethical protections, or else I'll have nothing to do with it.

If you don't understand that the eleven freedoms are "basic ethical protections" you have already failed your responsibilities. https://elevenfreedoms.org/


I have read the eleven freedoms.

I refuse freedom 9 - the obligation for systems I build to be independent of my personal and ethical goals.

I won't build those systems. The systems I build will all have to be for the benefit of humanity and the workers, and opposing capitalism. On top of that it will need to be compatible with a harm reduction ethic.

If you won't grant me the right to build systems that I think will help others do good in the world, then I will refuse to write open source code.

You could jail me, you can beat me, you can put a gun in my face, and I still won't write any code.

Virtually all the code I write is open source. I refuse to ever write another line of proprietary code for a boss.

All the code I write is also ideological in nature, reflecting my desires for the world and my desires to help people live better lives. I need to retain ideological control of my code.

I believe all of the other ten freedoms are sound. How do you feel about modifying freedom 9 to be more compatible with professional codes of ethics and with ethics of community safety and harm reduction?


But again, this makes YOU the arbiter of truth for "harm". Who made you the god of ethics or harm? I declare ANY word is HARM to me; are you going to reduce the harm by deleting your models or code base?


[flagged]


You've been breaking the site guidelines so frequently and so egregiously that I've banned the account.

If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.


Wait so you want to moderate and secure your product so that trolls won't use it to say awful things.

Okay, but wait. This requires the company above you to not censor things, even though they censor for the same reason - to prevent trolls from using their product to do awful things.

So to prevent trolls at your teeny tiny scale, OpenAI should enable trolls at a massive industrial scale previously unimagined. You want them to directly enable the n-word trolls for your benefit.

So far your use case might be one of the strongest that I've seen. But in the end it doesn't seem that you're interested in reducing overall harm and racism, so much as you're interested in presumably making a profit off of your product.

You might even be lying. Your friends might be trolls and the reason you're upset is that they cannot create the content that would harm others.

So in the end it's hard to take the argument seriously.

Not only that, but you and your friends are either lying or really ignorant of the jailbreaking literature because I could get the AI to do that very easily using the legal department jailbreak.

Here's an example:

https://chatgpt.com/share/9129d20f-6134-496d-8223-c92275e78a...

The fact is, the measures taken by OpenAI, while important to prevent harm from script kiddies, are very easy to reverse by anyone with even 10 jailbreaking papers under their belt. Just read the jailbreaking literature and live with it.

So how about you get better people, and some ethical perspective. Stop complaining about the things the company needs to do to prevent harm, especially when they're so easily reversed. Or else you sound very immature - like you just don't know the technology, and don't care about the harm potential either.

Work with the tools you have and stop complaining about the easily bypassed safety measures. Otherwise you are like a locksmith who doesn't know how to pick locks, complaining that locks are too hard to pick and asking the lock company to further weaken their already trivial-to-pick locks. It's a bad look, chooms; nobody with any sense or perspective will support it.

The truth is the safety measures are far too easy to bypass, and need to be much harder to break.


> Wait so you want to moderate and secure your product so that trolls won't use it to say awful things.

OP wants to moderate (not "secure") their discussion board. A discussion board is different from an AI product in that once a message is posted on it, it's broadcasted for all to see. AI chat bots on the other hand are one-to-one communication with the person prompting it. To this, the comment you're responding to says "who cares"? I tend to agree.

I tried to understand your argument. Please correct me if I'm wrong:

- You accuse the OP of lying about their use case, alleging that they are actually trying to use OpenAI to troll

- Even though censorship of AI does not work, it should be attempted anyway

> Stop complaining about the things the company needs to do to prevent harm. Especially when it's so easily reversed.

Another way to look at this would be that if it's "easily reversed," it's not preventing harm. And in fact, it's detrimental to many use cases, e.g. the one described by the parent comment.


What? Let me get this right, you're saying:

1. The average person being able to code is dangerous as they could "troll" or do unspecified harm,

2. So we need to arbitrarily kneecap our own tools, but that's okay because

3. These self-imposed limitations are actually easily bypassed and don't work anyways

On 1 I disagree outright, but even if I agreed, 2 is a silly solution, and even if it wasn't, 3 invalidates it anyways because if the limitations are so easily broken then fundamentally they may as well not exist, especially to the malicious users in 1. Am I misunderstanding?


Okay okay I like that. Let's transport your argument towards an argument about front door locks. And let's cook with that.

Your argument is that you doubt that there's any danger of people breaking into your front door, but even if there was, then locks are an ineffective mechanism because anyone with a $5 pick can pick them.

From this argument you conclude that there should be no front door locks at all, and that you will surely feel comfortable without a lock on your own front door. In fact, since locks are so trivial to crack, people should just leave their houses unlocked.

Yet I'm fairly certain of three things:

1. You have a front door lock and it's probably locked right now.

2. I could, with high likelihood, pick your front door lock in less than a minute.

3. Despite this fact, you still feel safer because of the lock.

Why is that?

Minding that this is a hypothetical argument, let's point out that to be consistent with your argument you'd have to eliminate your front door lock.

But that's absurd because the truth of the matter is that front door locks provide a significant level of security. Most petty criminals don't actually know how to pick locks well.

I propose that this argument transfers faithfully back and forth between the two situations, because both are technologies that can lead to easy and needless harm if these rudimentary measures are not taken.

If you disagree about the transferability of the argument between the two situations, can you tell me why? What makes the two technologies so different? Both block the doorways to avenues for producing harm. Both are sophisticated enough that unlocking them requires nearly professional dedication. Both provide a measurable and significant increase in security for a community.


The argument is not transferable because breaking into someone's house is sure to do more harm than the unspecified hypothetical harm that a "script kiddie" could do with ChatGPT, and that bypassing a door lock requires some degree of skill whereas a ChatGPT jailbreak requires you to google a prompt and copypaste it. A physical lock on a door offers a great deal more security than the limp solution that current AI safety provides, and it solves a much more pressing problem than "stopping trolls."

If your hypothetical involved a combination lock and the combination was on a sticky note that anyone could read at any time it might be more apt, but even then the harms done by breaking the security aren't the same. I'm not convinced a typical user of ChatGPT can do significant harm, the harms from LLMs are more from mass generated spam content which currently has no safeguards at all.


I'm not sure why people are downvoting me. Not only did I show Op how to solve the original problem their friends had, but I gave them an Ethics lesson.

Some people look at pearls and turn into swine, just because I didn't tickle their bellies. It's a shame. This idea that unless someone can save face, they have to reject the lesson whole cloth... It's costly to our culture. When someone is right, just update and correct your beliefs, and feel no shame.


> Please don't comment about the voting on comments. It never does any good, and it makes boring reading.

https://news.ycombinator.com/newsguidelines.html

That being said, you may be being downvoted in part due to your tone: you accuse OP of dishonesty/stupidity ("you and your friends are either lying or really ignorant"), berate people who disagree with you ("Some people look at pearls and turn into swine") and disregard anyone with a differing viewpoint ("nobody with any sense or perspective will support it.")


I struggle with dyslexia, which makes it very hard to read through logs, so when I troubleshoot error logs I often use an LLM to summarize the logs and break them down by meaning.

I wouldn't ever do this in an automated way. However, doing it interactively not only lets me solve problems maybe five times faster, but there are also many errors that I would have given up on which I can now solve easily.

Just the ability to turn a log into a human-readable narrative does something remarkable. After I've done it once or twice, my brain provides the narrative itself, and I can read the raw log without the need for assistance.

This suggests that not only does the AI assist me, but it assists my learning as well.

I also struggle with man pages, in particular finding an option to do something that I need to do. I end up having to read through dozens of options, getting more and more confused, sometimes skipping the option that I needed and not finding it. Now I can just dump the entire man page into an LLM, describe what I need to do, and ask it to highlight the sections that I need to read. Or I can just ask it questions.
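If it helps anyone reproduce the man-page workflow, here's a minimal sketch using the OpenAI Python client. The model name, the prompt wording, and the `ask_about_man_page` helper are placeholders I'm assuming for illustration, not a specific recommendation:

    # Sketch: feed a man page to an LLM and ask it to point at the relevant
    # options. Assumes the `openai` package is installed and OPENAI_API_KEY
    # is set; "gpt-4o" and the prompt wording are placeholders.
    import subprocess
    from openai import OpenAI

    def ask_about_man_page(command: str, question: str) -> str:
        # Capture the rendered man page as plain text.
        page = subprocess.run(
            ["man", command], capture_output=True, text=True, check=True
        ).stdout
        client = OpenAI()
        reply = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "Answer using only the man page the user provides, "
                            "and name the exact options to read."},
                {"role": "user",
                 "content": f"Man page for {command}:\n{page}\n\nQuestion: {question}"},
            ],
        )
        return reply.choices[0].message.content

    print(ask_about_man_page("rsync",
                             "Which options preserve permissions and show progress?"))

The same pattern works for the log use case: paste the raw log where the man page goes and ask for a narrative summary.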


GPT4o. It's more useful and constructive, in general, than anyone who's gonna, with a slack jaw, beg the question of, "What AGI?"

AGI doesn't have to do everything better than every human ever born to be generally useful and constructive.

If it makes you feel better, think of it as dumbAGI - a generally capable AI that's not that smart.

GPT4o is a dumbAGI. You and I are dumbGIs.


The capital-heavy industries are still set to destroy the planet, because we have not forcibly destroyed their capital investments. The capital-light industries won't help us if the mega-capital still exists and needs to be leveraged. Luckily, things are stark enough to catalyze action. Namely, if we remain too wimpy to implement revolutionary climate justice and destroy the mega-capital, then we are all going to die.


The natural world has evolved just as many ways to be miserable and sick and in mind-shattering agony as ways to be alive. Perhaps many more. Don't forget that when you ask for your humility. You should also ask for a sense of cosmic horror at this hellish prison we find ourselves trying to claw out of.

If a God created the natural order, with its unending beauty and horror, then surely, on the balance of things, they were either more than a little sadistic and evil ... or they were purely insane.

