Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Comment the title is base64 to avoid spoilers because ChatGPT does not recognize what book this is.


Good idea!

Um9iZXJ0IEogU2F3eWVyIC0gU3RhcnBsZXgK


Holy cow. ChatGPT 4 actually decoded this. It went into analysis mode, wrote some python, ran it, and gave the correct answer.


Did you ask it to create code for this?

Me: Um9iZXJ0IEogU2F3eWVyIC0gU3RhcnBsZXgK ChatGPT: "xxxx" is a science fiction novel by Robert x xxxx. It explores themes of discovery, the nature of the universe, and the potential for cooperation among diverse life forms. The story revolves around a space station, xxxxxx, and its crew as they encounter mysterious wormholes, alien species, and cosmic phenomena, challenging their understanding of the universe and their place within it.

ps. I don't know what is there to spoil by sharing a book title but whatever :)


I'm under the impression that the spoiler is what they said about the gigastructure of galaxies. If you don't know the title, you can't related it to the spoiler they just told you


Ah, makes sense. I'm slow.


Base64 encoding is a common way of jailbreaking LLMS. The llm just deals with vectorspaces so to it, base64 is just another language for the encoding/tokenization layer to learn.


Yeah - I’m just shocked I didn’t get a hallucinated response for the query.


I wonder what other code you could get it to execute for free.


It can also decode w/o analysis mode. Try GPT Classic


Gemini does too, not sure that's all too surprising


Why are you surprised? Is base64 hard for chatgpt, or am I missing something?


I basically said “decode this thing.”

I’m just surprised it hit all the steps properly rather than hallucinating a response.


Its base64. Its not an encryption.


Sure, but it’s pretty amazing to me that ChatGPT didn’t just hallucinate a response to a generic request to decode a string. It recognized the string as base64, wrote a valid program to decode it, and returned the correct response.

Maybe I’m just old and amazed, but that seems pretty cool (terrifying?) to me.


What's terrifying about it? Serious question. Lately people are associating all sorts of doom to LLMs so I'm curious to understand.


I’m not sure how I feel yet. Rapid rate of change can uproot systems pretty quickly. I guess I’m just holding judgement to see if this is a new Industrial Revolution, and fallout that might occur. Ideally this wouldn’t be a worry, but we don’t live in an ideal world.

I personally use AI as a tool, and feel more productive FWIW.


If that's terrifying to you, definitely do not check out Sora by OpenAI




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: