
Is there a trick to get output like this? I have been trying out the different models/sites/OSS setups when they show up here and I get _nothing_ like this. I don't even get the images others use as examples, so I feel like I'm obviously doing it wrong. I try example prompts and prompts people say they used, and I still get images that are as bad as if I tried to make them myself. To me this is crazy and almost unbelievable that it was generated, mostly because my experience has been the complete opposite. I'm on the fence about the artist debate, but seeing this, I think the ones who keep their jobs will be the ones who gain a new skill as prompt artists*. To me that seems like a very valuable and real skill going forward.

[*] kinda like early 'web designers' who needed to learn JavaScript



Yeah, so specifically for Midjourney, but the others should have something similar:

0. Read the manual thoroughly, don’t speed read or skim.

1. Set the seed so that it's only your prompt that changes while iterating on it: "--seed **"

2. Set stylize to a low number: "--stylize 200"

3. Build the scene up by adding one part at a time (use a thesaurus to try out different words)

4. Weight the different prompt segments “SOME WORDS::10”, try to make the weights add up to 100% for easy math

5. Use negative weights to remove things you don’t want “BAD THING::-1”

6. Once you have zeroed in on a good prompt, increase the quality with "--quality 2", try out different stylize values, and start rolling the seed many times (a combined example follows this list).

7. Expect it to take 40+ iterations to get a good prompt and maybe another 20+ rolls to get a good seed. This can be less once you get good, or more if you go for more advanced scenes. Your first few good images should take a few hours to get.
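
To make this concrete, a prompt pulling these flags together might look roughly like the following (the subject, weights, seed, and values are made up for illustration, not a known-good prompt):

    misty forest at dawn::60 lone hiker in a red jacket::30 volumetric light::10 blurry::-1 --seed 1234 --stylize 200 --quality 2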


Thank you for this; setting proper expectations is also very helpful. It honestly should be in the marketing material for these things; they make it sound like anyone can load it up for the first time, type a few words, and get world-class art at the ready.

Reminds me of trying to get people into programming. They are shown a hello world, think it looks easy and fun, then try to make something useful and rage quit shortly after.


I would recommend lurking around /r/StableDiffusion.


What exactly are you struggling with? The main challenge with Midjourney is opening up a broader style palette, while the out-of-the-box vanilla results are usually amazing in v4.


I hadn't tried Midjourney, probably because I don't usually bother with Discord. I just tried it out, and while the results are probably better than anything I got before, there are a lot of weird things going on (per my usual experience) that don't seem to appear in the examples other people post, and definitely not in the topic post: distorted faces, extra body parts, weird 'deformities', odd proportions, etc. Here are both variations and upscales of the same prompt (which is probably my issue?). This was without reading the 'tips' or looking through /r/StableDiffusion yet. It still really seems like prompting is an art form and skill that needs to be learned, or at least you need the skills to touch up the images.

::a happy family in the middle of a nucular wasteland:: (I just realized I spelt nuclear wrong, does that matter?)

[original] https://cdn.midjourney.com/2685df56-6e1a-4828-9bd5-2960271d2...

[upscaled] https://cdn.midjourney.com/b7363830-0a77-4362-86bb-a1bc30ff7...

[variation] https://cdn.midjourney.com/cf9d990b-0745-4ef5-b977-31cbd0efc...

[variations] https://cdn.midjourney.com/7337ed7f-041b-43d0-8626-2b3992fbd...

[upscaled] https://cdn.midjourney.com/a29c53da-e9cf-4192-88cd-d56d4e2d3...

[variations] https://cdn.midjourney.com/497b5044-605c-42f8-b285-868515211...

[upscaled] https://cdn.midjourney.com/dcdc1eb6-c5a0-4df9-9cc9-93fa8cf12...

[variations] https://cdn.midjourney.com/46822f7a-b90f-4119-9cee-4f3967dc3...


That's... not nearly enough in your prompt. I suggest you browse the v4-showcase channel and look at the prompts other people are using (users sometimes share them). Alternatively, go to a site like prompthero.com and look up some examples. At a minimum, you need to use the --v 4 trigger to get it to use the new (better) model. You also generally need to tell it what style you want with some more description.
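
For example, building on the prompt you tried, adding a style description plus the version flag might look something like this (the style words here are only placeholders to show the shape of it; borrow ones from the showcase that actually work):

    a happy family in the middle of a nuclear wasteland, cinematic lighting, muted color palette, highly detailed, wide shot --v 4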

Fwiw, I've found Stable Diffusion outputs to be mostly crap in comparison to DALL-E and Midjourney. DALL-E seems to output results closest to what you ask for with low-specificity prompts, and Midjourney tends to give the most "stylistic" results, like what you would see on a digital art site.


V4 is the default at the moment, so you shouldn’t have to specify it.

Also you can set your personal settings with “/settings”


Ah, thanks, been a couple weeks since I used it.


If you look at Civitai, you'll see that the good images often have massive negative prompts that literally say things like "missing fingers:-1" or equivalent.
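
For reference, on the Stable Diffusion side those image pages usually show a separate negative prompt field that looks roughly like this (the exact terms vary by model and checkpoint; this is just an illustration of the pattern):

    Negative prompt: lowres, blurry, bad anatomy, extra fingers, missing fingers, deformed hands, watermark, text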


I had this experience with Midjourney until I found out there were certain keywords I had to include to enable their newer engines.



