Hacker News

As a person who did a PhD in CFD, I must admit I never encountered the vorticity confinement method and curl-noise turbulence. I guess you learn something new every day!

Also, in industrial CFD, where the Reynolds numbers are higher, you'd never want something like counteracting the artificial dissipation of the numerical method by applying noise. In fact, quite often people want artificial dissipation to stabilize high-Re simulations! Guess the requirements in computer graphics are more inclined towards making something that looks right instead of getting the physics right.




> Guess the requirements in computer graphics are more inclined towards making something that looks right instead of getting the physics right.

The first rule of real-time computer graphics has essentially always been "Cheat as much as you can get away with (and usually, even if you can't)." Also, it doesn't even have to look right, it just has to look cool! =)


“Everything is smoke and mirrors in computer graphics - especially the smoke and mirrors!”


This was the big realization for me when I got into graphics - everything on the screen is a lie, and the gestalt is an even bigger lie. It feels similar to how I would imagine it feels to be a well-informed illusionist - the fun isn’t spoiled for me when seeing how the sausage is made - I just appreciate it on more levels.


My favorite example of the lie-concealed-within-the-lie is the ‘Half-Life Alyx Bottles’ thing.

Like, watch this video: https://www.youtube.com/watch?v=9XWxsJKpYYI

This is a whole story about how the liquid in the bottles ‘isn’t really there’ and how it’s not a ‘real physics simulation’ - all while completely ignoring that none of this is real.

There is a sense in which the bottles in Half-Life Alyx are ‘fake’ - that they sort of have a magic painting on the outside of them that makes them look like they’re full of liquid and that they’re transparent. But there’s also a sense in which the bottles are real and the world outside them is fake. And another sense in which it’s all just tricks to decide what pixels should be what color 90 times a second.


I want to see that shader. How is sloshing implemented? Is the volume of the bottle computed on every frame?

Clearly, there's some sort of a physics simulation going on there, preserving the volume, some momentum, and taking gravity into account. That the result is being rendered over the shader pipeline rather than the triangle one doesn't make it any more or less "real" than the rest of the game. It's a lie only if the entire game is a lie.


Is it really doing any sloshing though? Isn't it "just" using a plane as the surface of the liquid? And then adding a bunch of other effects, like bubbles, to give the impression of sloshing?
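A guess at what that plane trick might look like, sketched in Python rather than shader code. Everything here is an assumption about the technique the comment describes, not Valve's actual implementation: the liquid "surface" is just a plane whose normal blends world-up with the bottle's up vector (a real version would presumably low-pass filter this over time to fake sloshing inertia), and a point counts as liquid if it lies below that plane.

```python
import numpy as np

# Hypothetical sketch of the plane-as-surface trick described above.
def liquid_surface_plane(bottle_up, fill_fraction, wobble=0.3):
    world_up = np.array([0.0, 1.0, 0.0])
    bottle_up = np.asarray(bottle_up, float)
    # Blend world-up with the bottle's axis to fake a tilting surface.
    normal = (1.0 - wobble) * world_up + wobble * bottle_up
    normal /= np.linalg.norm(normal)
    # The plane passes through the fill level along the bottle's axis.
    point = fill_fraction * bottle_up
    return normal, point

def is_liquid(p, normal, point):
    # Shade a fragment as liquid if it is on the underside of the plane.
    return float(np.dot(np.asarray(p, float) - point, normal)) < 0.0
```

The "sloshing" would then be entirely in how the plane's normal and fill point are animated; no fluid is ever simulated.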


This is the perfect example of what I meant. So many great quotes in this about the tricks being stupid and also Math. Also the acknowledgment that it’s not about getting it right it’s about getting you to believe it.


At some point every render-engine builder goes through the exercise of imagining purely physically-modeled photonic simulation. How soon one gives up on this computationally intractable task with limited marginal return on investment is a signifier of wisdom/exhaustion.

And, yes, I've gone way too far down this road in the past.


I have heard the hope expressed that quantum computers might solve that one day, but I'll believe it when I see it.

Till then, I have some hope that native support for raytracing on the GPU will allow for more possibilities...


Not being a graphics person, is this what hardware ray tracing is, or is that something different?


Raytracing doesn't simulate light, it simulates a very primitive idea of light. There's no diffraction, no interference patterns. You can't simulate the double-slit experiment in a game engine, unless you explicitly program it.

Our universe has a surprising amount of detail. We can't even simulate the simplest molecular interactions fully. Even a collision of two hydrogen atoms is too hard: the time and space resolution required is insanely high, if not infinite.
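Explicitly programming it is the easy part, of course; the point is that the renderer's light model will never produce it on its own. A minimal sketch of the idealized far-field two-slit intensity pattern (single-slit envelope ignored, parameters illustrative):

```python
import numpy as np

# Idealized far-field double-slit interference: two coherent point sources
# a distance d apart, wavelength lam. Intensity on a distant screen is
#   I(theta) = cos^2(pi * d * sin(theta) / lam)
# normalized to 1 at the central maximum.
def double_slit_intensity(theta, d, lam):
    phase = np.pi * d * np.sin(theta) / lam
    return np.cos(phase) ** 2

d, lam = 1e-4, 500e-9                            # 0.1 mm slits, 500 nm light
center = double_slit_intensity(0.0, d, lam)      # central maximum: 1.0
theta_min = np.arcsin(lam / (2 * d))             # first dark fringe
dark = double_slit_intensity(theta_min, d, lam)  # essentially 0
```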


And what’s more, it simulates a very primitive idea of matter! All geometry is mathematically precise, still made of flat triangles that only approximate curved surfaces, every edge and corner is infinitely sharp, and everything related to materials and their interaction with light (diffuse shading, specular highlights, surface roughness, bumpiness, transparency/translucency, and so on) is still the simplest possible model that gives a somewhat plausible look, raytraced or not.


Btw there is some movement towards including wave-optics into graphics instead of traditional ray-optics: https://ssteinberg.xyz/2023/03/27/rtplt/


This is very cool. Thx for sharing


Raytracing simulates geometrical optics. That it doesn't take interference patterns into account is therefore a mathematical limitation and of course true, but irrelevant for most applications.

There are other effects (most notably volume scattering) which could be significant for the rendered image and which can be simulated with raytracing, but they are usually neglected for various reasons, often because they are computationally expensive.


> You can't simulate the double-slit experiment in a game engine, unless you explicitly program it.

Afaik you can't even simulate double-slit in high-end offline VFX renderers without custom programming


I wanted to make a dry joke about renderers supporting the double slit experiment, but in a sense you beat me to it.


There sure has been a lot of slits simulated in Blender.


Even the ray tracing / path tracing is half-fake these days, because it's faster to upscale and interpolate frames with neural nets. But yeah, in theory you can simulate light realistically.


It’s still a model at the end of the day. Material properties like roughness are approximated with numerical values instead of being physical features.

Also light is REALLY complicated when you get close to a surface. A light simulation that properly handles refraction, diffraction, elastic and inelastic scattering, and anisotropic material properties would be very difficult to build and run. It’s much easier to use material values found from experimental results.


If I understood Feynman’s QED at all, light gets quite simple once you get close enough to the surface. ;) Wasn't the idea that everything’s a mirror? It sounds like all the complexity comes entirely from surface variation: a cracked or ground-up mirror is still a mirror at a smaller scale but has complex aggregate behavior at a larger scale. Brian Greene’s string theory talks send the same message, more or less.


Sure, light gets quite simple as long as you can evaluate path integrals that integrate over the literally infinite possible paths that each contributing photon could possibly take!

Also, light may be simple but light interaction with electrons (ie. matter) is a very different story!


Don't the different phases from all the random paths more or less cancel out, with significant additions of phase coming only from paths near the "classical" path? I wonder if this reduction would still be tractable on GPUs to simulate diffraction.


That’s what I remember from QED, the integrals all collapse to something that looks like a small finite-width Dirac impulse around the mirror direction. So the derivation is interesting and would be hard to simulate, but we can approximate the outcome computationally with extremely simple shortcuts. (Of course, with a long list of simplifying assumptions… some materials, some of the known effects, visible light, reasonable image resolutions, usually 3-channel colors, etc. etc.)
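That cancellation is easy to see numerically. A toy version of the stationary-phase picture, in the spirit of the mirror demo in Feynman's QED: sum a unit phasor exp(i·k·L) over every point on a mirror where light could bounce from source to detector. Away from the specular point the phase spins rapidly and contributions cancel; nearly all the amplitude comes from paths near the classical one. (The wavelength is exaggerated to a hypothetical 1 cm so the discrete sum stays well-sampled; geometry is an arbitrary choice.)

```python
import numpy as np

lam = 0.01                     # exaggerated 1 cm "wavelength"
k = 2 * np.pi / lam
xs = np.linspace(-2.0, 2.0, 200001)   # reflection points along the mirror
# Source at (-1, 1), detector at (1, 1): the classical bounce is at x = 0.
L = np.hypot(xs + 1.0, 1.0) + np.hypot(xs - 1.0, 1.0)   # total path length
phasors = np.exp(1j * k * L)
near = phasors[np.abs(xs) < 0.5].sum()        # neighborhood of x = 0
far = phasors[(xs > 1.0) & (xs < 1.5)].sum()  # equally dense off-specular stretch
```

|near| dominates |far| by well over an order of magnitude: the sum collapses to a narrow lobe around the mirror direction, exactly the "finite-width impulse" described above.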

I just googled and there’s a ton of different papers on doing diffraction in the last 20 years, more than I expected. I watched a talk on this particular one last year: https://ssteinberg.xyz/2023/03/27/rtplt/


Orbital shapes and energies influence how light interacts with a material. Yes, once you get to QED, it’s simple. Before that is a whole bunch of layers of QFT to determine where the electrons are, and their energy. From that, there are many emergent behaviors.

Also QED is still a model. If you want to simulate every photon action, might as well build a universe.


Had the same realization that game engines were closer to theater stages than reality after a year of study. The term “scene” should have tipped me off to that fact sooner.


A VP at Nvidia has a cool (marketing) line about this, regarding frame generation of "fake" frames: because DLSS is trained on "real" raytraced images, they are more real than conventional game graphics.

So the non-generated frames are the "fake frames".


> Also, it doesn't even have to look right, it just has to look cool! =)

The refrain is: Plausible. Not realistic.

Doesn't have to be correct. Just needs to lead the audience to accept it.

Yeah. And, cheat as much as you can. And, then just a little more ;)


Why not cheat? I'm not looking for realism in games, I'm looking for escapism and to have fun.


It's kinda cool when it feels real enough to be used as a gameplay mechanic, such as diverting rivers and building hydroelectric dams in the beaver engineering simulator Timberborn: https://www.gamedeveloper.com/design/deep-dive-timberborn-s-...


The reason not to cheat is that visual artifacts and bugs can snap you out of immersion. Think of realizing that you don't appear in a mirror or that throwing a torch into a dark corner doesn't light it up. Even without "bugs" people tend to find more beautiful and accurate things more immersive. So if you want escapism, graphics that match the physics that you are used to day-to-day can help you forget that you are in a simulation.

The reason to cheat is that we currently don't have the hardware or software techniques to physically simulate a virtual world in real-time.


I didn't mean to imply that cheating is bad. Indeed it's mandatory if you want a remotely real-time performance.


I don't think this is a great argument because everybody is looking for some level of realism in games, but you may want less than many others. Without any, you'd have no intuitive behaviors and the controls would make no sense.

I'm not saying this just to be pedantic - my point is that some people do want some games to have very high levels of realism.


Some of the best games I've played in my life were the text based and pixel art games in MS DOS. Your imagination then had to render or enhance the visuals, and it was pretty cool to come up with the pictures in your head.


I realize the thread started about graphics, but I didn't only mean realistic graphics when I referred to realism, because the comment I was replying to just said "realism in games". I do expect there's some degree of realism in your favorite text-based or pixel-art games as well. As you say, it's clear that a game doesn't need photorealistic graphics to be good, because most games do not.


I mean yes, but ultimately we want to be able to simulate ourselves as truly as possible, to be able to understand our origins, which are likely simulated as well, right?


This reminds me of how the Quake-3-based game Tremulous has just two functions to simulate physics:

https://github.com/darklegion/tremulous/blob/master/src/game...


> ... the requirements in computer graphics are more inclined towards making something that looks right instead of getting the physics right.

That is exactly correct. That said, as something of a physics nerd (it was a big part of the EE curriculum in school), I often got chuckles in the arcade for pointing out things which were violating various principles of physics :-). And one of the fun things was listening to a translated interview with the physicist who worked on Mario World at Nintendo, who stressed that while the physics of Mario's world were not the same as "real" physics, they were consistent and had rules just like real physics does, and that this was important for players to understand what they could and could not do in the game (and how they might solve a puzzle in the game).


Yes, consistency. Similarly in sci-fi/fantasy, where an absurd conceit is believable if internally consistent, i.e. self-consistent (I also call it principled). Too much ad hoc prevents suspension of disbelief.

Reactions are an important part of this: it's not just how good an actor is, but how other characters react to them. If they are ignored, it's like that character doesn't exist, has no effect, is inconsequential, doesn't matter... is not real.

In a fluid simulation, when water does not react to the shoreline, rocks in the water, columns supporting a pier etc, it is like the water (or the obstacles) don't exist. In this sense, the water (or the obstacles) "aren't real".

For example, the water in Sea of Thieves looks magnificent... but because it doesn't interact (it's procedural), it doesn't feel real. It isn't real.

A further problem here with water is that it's difficult to make a continuous fluid that is consistent yet isn't realistic, because the basic principles of fluids are so very, very simple: conservation of mass, conservation of momentum.

[ Of course, you can still vary density and gravity. There are other effects beyond these: viscosity, surface tension. Also, compressible fluids, different Reynolds numbers, adiabatic effects, behaviour at non-everyday temperatures, pressures, scales, velocities, accelerations, etc. ]

There are also simplifications, like depth-averaging (the Shallow Water Equations, aka Saint-Venant), but again, it's hard to vary it and yet remain self-consistent.

Cellular methods - similar to Conway's Game of Life but for fluid - are maybe an exception, because they are self-consistent, but they're pretty far from "water", because they lack momentum (and aren't continuous).

The final issue is error: simulations are never perfectly self-consistent anyway, because they must discretize the continuous, which introduces error. In engineering, you can reduce this error with a finer grid, until it is too small to matter for your specific application. For computer graphics, it only needs to be *perceivably* self-consistent - and perhaps pixel size is one measure of this, for the *appearance* of consistency in area/volume, acceleration, velocity, displacement (...though we can perceive sub-pixel displacement, e.g. an aliased line).


An imperceptible inconsistency can become perceptible if it is cumulative: mass may fail to be conserved only very slightly each step, but over time the amount of water changes significantly. One ad hoc solution is to redistribute the missing water (or remove the surplus) everywhere, to compensate.
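That redistribution fix can be sketched directly for a height-field water grid. The helper name and the uniform-redistribution choice are illustrative (that choice is exactly the ad hoc part); it assumes at least one wet cell remains and that the correction is small enough not to drain any cell:

```python
import numpy as np

# After a simulation step, measure how much water the step created or
# destroyed, then spread the correction evenly over the wet cells so the
# total volume stays constant.
def correct_mass(depth, target_total, cell_area=1.0):
    depth = depth.copy()
    wet = depth > 0.0
    error = target_total - depth.sum() * cell_area  # deficit (>0) or surplus (<0)
    depth[wet] += error / (wet.sum() * cell_area)   # redistribute evenly
    return depth
```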


I think the curl noise paper is from 2007: https://www.cs.ubc.ca/~rbridson/docs/bridson-siggraph2007-cu...

I've used the basic idea from that paper to make a surprisingly decent program to create gas-giant planet textures: https://github.com/smcameron/gaseous-giganticus


Hey, that paper references me. ;) I published basic curl noise a few years before that in a Siggraph course with Joe Kniss. Bridson’s very cool paper makes curl noise much more controllable by adding the ability to insert and design boundary conditions; in other words, you can “paint” the noise field, put objects into it, and have particles advect around them. Joe's and my version was a turbulence field based on simply taking the curl of a noise field, because the curl has the property of being incompressible. I thought about it after watching an effects breakdown of X-Men’s Nightcrawler teleport effect, where they talked about using a fluid simulation, IIRC, to get the nice subtleties that incompressible “divergence-free” flows give you. BTW, I don’t remember exactly how they calculated their turbulence; I have a memory of it being more complicated than curl noise, but maybe they came up with the idea, or maybe it predates X-Men too. It’s a very simple idea based on known math, fun and effective for fake fluid simulation.

We liked to call it “curly noise” when we first did it, and I used it on some shots and shared it with other effects animators at DreamWorks at the time. Actually, the very first name we used was “curl of noise,” because the implementation was literally curl(noise(x)), but curly noise sounded better/cuter. Curly noise is neat because it’s static and analytic, so you can do a fluid-like-looking animation sequence with every frame independent: you don’t need to simulate frame 99 in order to render frame 100, so you can send all your frames to the farm to render independently. On the other hand, one thing that’s funny about curly noise is that it’s way more expensive to evaluate at a point in space than a voxel-grid fluid update step, at least when using Perlin noise, which is what I started with. (Curly noise was cheaper than using PDI’s (Nick Foster’s) production fluid solver at the time, but I think the Stam semi-Lagrangian advection thing started making its way around and generally changed things soon after that.)
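The curl(noise(x)) construction is simple enough to sketch in a few lines. A 2D toy version in Python, with a sum of random sine waves standing in for Perlin noise (in 2D the "curl" of a scalar field ψ is the vector (∂ψ/∂y, −∂ψ/∂x)), so the velocity field is divergence-free by construction, up to finite-difference error:

```python
import numpy as np

rng = np.random.default_rng(0)
ks = rng.normal(size=(5, 2))              # random wave vectors
phases = rng.uniform(0.0, 2 * np.pi, 5)  # random phases

def psi(x, y):
    # Cheap stand-in for a Perlin/simplex noise field.
    return sum(np.sin(k[0] * x + k[1] * y + p) for k, p in zip(ks, phases))

def velocity(x, y, h=1e-5):
    # v = curl(psi). In production you'd differentiate the noise
    # analytically; central differences keep the sketch short.
    vx = (psi(x, y + h) - psi(x, y - h)) / (2 * h)
    vy = -(psi(x + h, y) - psi(x - h, y)) / (2 * h)
    return vx, vy

def divergence(x, y, h=1e-4):
    dvx = (velocity(x + h, y)[0] - velocity(x - h, y)[0]) / (2 * h)
    dvy = (velocity(x, y + h)[1] - velocity(x, y - h)[1]) / (2 * h)
    return dvx + dvy   # ~0 everywhere: incompressible by construction
```

Because the field is static and analytic, any frame of an animation can be evaluated independently, which is the farm-friendly property described above.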

BTW gaseous giganticus looks really neat! I browsed the simplex noise code for a minute and it looks gnarly, maybe more expensive than Perlin even?


> the simplex noise code for a minute and it looks gnarly, maybe more expensive than Perlin even?

In 2014 when I wrote it, 3D Perlin noise was still patent encumbered. Luckily at the same time I was working on it, reddit user KdotJPG posted a Java implementation of his Open Simplex noise algorithm on r/proceduralgeneration (different than Ken Perlin's Simplex noise), and I ported that to C. But yeah, I think Perlin is a little faster to compute. I think the patent expired just last year.

Also Jan Wedekind recently implemented something pretty similar to gaseous-giganticus, except instead of doing it on the CPU like I did, managed to get it onto the GPU, described here: https://www.wedesoft.de/software/2023/03/20/procedural-globa...


reminds me of this - https://www.taron.de/forum/viewtopic.php?f=4&t=4

it's a painting program where the paint can be moved around with a similar fluid simulation


This is unfortunately a discussion I've had to have many times. "Why does your CFD software take hours to complete a simulation when Unreal Engine can do it in seconds" has been asked more than once.


Too bad they don't reference the actual inventor of vorticity confinement, Dr. John Steinhoff of the University of Tennessee Space Institute:

https://en.wikipedia.org/wiki/John_Steinhoff

https://en.wikipedia.org/wiki/Vorticity_confinement

And some papers:

https://www.researchgate.net/publication/239547604_Modificat...

https://www.researchgate.net/publication/265066926_Computati...
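For reference, the confinement term itself is compact. A hedged 2D grid sketch in the form popularized for graphics, f = ε·h·(N × ω), where N is the normalized gradient of |ω|; it pushes velocity back toward vortex cores, counteracting numerical dissipation. The array layout ([row = y, col = x]) and the ε·h scaling here are illustrative choices, not any particular paper's discretization:

```python
import numpy as np

def confinement_force(u, v, dx, eps):
    # z-component of vorticity on the grid: omega = dv/dx - du/dy
    w = np.gradient(v, dx, axis=1) - np.gradient(u, dx, axis=0)
    # N = normalized gradient of |omega|, pointing toward vortex cores
    gx = np.gradient(np.abs(w), dx, axis=1)
    gy = np.gradient(np.abs(w), dx, axis=0)
    mag = np.sqrt(gx**2 + gy**2) + 1e-12   # avoid division by zero
    nx, ny = gx / mag, gy / mag
    # In 2D, N x (omega z-hat) = (ny * omega, -nx * omega)
    return eps * dx * ny * w, -eps * dx * nx * w
```

For a uniform flow the vorticity vanishes everywhere, so the force is identically zero, as it should be.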


Maybe you're not the right person to ask, but I'll ask anyway: I would like to learn the basics of CFD, not because I expect to do much CFD in life, but because I believe the things I would have to learn in order to understand CFD are very useful in other domains.

The problem is that my analysis is very weak. My knowledge of linear algebra, differential equations, numerical methods, and so on is limited to roughly the level of an introductory university course. What would you suggest as a good start?

I like reading, but I also like practical exercises. The books I tried to read before to get into CFD were lacking in practical exercises, and when I tried to invent my own exercises, I didn't manage to adapt them to my current level of knowledge, so they were either too hard or too easy. (Consequently, they didn't advance my understanding much.)



This does look really good at a first glance. It seems like it uses mathematics that I'm not fully comfortable with yet -- but also takes a slow and intuitive enough approach that I may be able to infer the missing knowledge with some effort. I'll give it a shot! Big thanks.


You'll be able to understand the equations, I guess. The hard part is the numerical analysis: how do you prove your computations will (1) reach a solution (badly managed computations will diverge and never reach any solution) and (2) reach a solution that is close to reality?

For me that's the hard part (which I still don't get).

You could start with the Saint-Venant equations; although they look complicated, they're actually within reach. But you'll have to understand the physics behind them first (conservation of mass, conservation of momentum, etc.).


Regarding your two questions, some terms you could look up are: Courant number, von Neumann stability analysis, and Kolmogorov length and time scales.

With respect to (2), the standard industry practice is a mesh convergence study and comparing your solver's output to experimental data. Sadly, especially with Reynolds-averaged Navier-Stokes, there is no guarantee you'll get a physically correct solution.
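The Courant condition is easy to see in action. A toy 1D upwind advection scheme for u_t + a·u_x = 0 on a periodic domain (scheme and parameters chosen for illustration): with C = a·Δt/Δx ≤ 1 the solution stays bounded, and with C > 1 von Neumann analysis says sawtooth modes amplify each step, so the solution blows up:

```python
import numpy as np

def advect(C, steps=200, n=100):
    u = np.zeros(n)
    u[40:60] = 1.0                        # square pulse initial condition
    for _ in range(steps):
        u = u - C * (u - np.roll(u, 1))   # first-order upwind, periodic
    return u

stable = advect(C=0.9)     # bounded, but visibly smeared out: that smearing
                           # is the artificial dissipation discussed upthread
unstable = advect(C=1.1)   # high-frequency modes grow without bound
```

The stable run also illustrates the earlier point about dissipation: the first-order upwind scheme smears the pulse, which is precisely the numerical diffusion that keeps it stable.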


Yeah, I know, but I still haven't had time to dive into the theory enough to get a correct intuition of how it works or why there's no guarantee of a physically correct solution... Fortunately, my colleagues are the professors at my university who teach these subjects, so I'll find an answer :-)


Understand the equations, yes. However, I'm sufficiently out of practice that it takes me a lot of effort. So I guess you could say I'm not fluent enough to grasp the meaning of these things as fast as I think I would need to in order to properly understand them.


I went to a talk by a guy who worked in this space in the film industry; he said one of the funniest questions he ever got asked was "Can you make the water look more splashy?"


Was the PhD worth it in your opinion?


From a purely economic standpoint, difficult to say. I was able to build skills that are very in demand in certain technical areas, and breaking into these areas is notoriously difficult otherwise. On the other hand, I earned peanuts for many years. It'll probably take some time for it to pay off.

That said, never do a PhD for economic reasons alone. It's a period in your life where you are given an opportunity to build up any idea you would like. I enjoyed that aspect very much and hence do not regret it one bit.

On the other hand, I also found out that academia sucks so I now work in a completely different field. Unfortunately it's very difficult to find out whether you'll like academia short of doing a PhD, so you should always go into it with a plan B in the back of your head.


I feel this comment. I did a masters with a thesis option because I was not hurting for money with TA and side business income, so figured I could take the extra year (it was an accelerated masters). Loved being able to work in that heady material, but disliked some parts of the academic environment. Was glad I could see it with less time and stress than a PhD. Even so, I still never say never for a PhD, but it’d have to be a perfect confluence of conditions.


> On the other hand, I also found out that academia sucks

I also found that I don't want to be a good academic, but a good researcher. Most importantly, that these are two very different things.


What does worth it mean? FWIW, PhDs are really more or less required if you want to teach at a university, or do academic research, or do private/industrial research. Most companies that hire researchers look primarily for PhDs. The other reason would be to get 5 years to explore and study some subject in depth and become an expert, but that tends to go hand in hand with wanting to do research as a career. If you don’t want to teach college or do research, then a PhD might be a questionable choice; and if you do, it’s not really a choice, it’s just a prerequisite.

The two sibling comments both mention the economics isn’t guaranteed - and of course nothing is guaranteed on a case by case basis. However, you should be aware that people with PhDs in the US earn an average of around 1.5x more than people with bachelor’s degrees, and the income average for advanced degrees is around 3x higher than people without a 4-year degree. Patents are pretty well dominated by people with PhDs. There’s lots of stats on all this, but I learned it specifically from a report by the St Louis Fed that examined both the incomes as well as the savings rates for people with different educational attainment levels, and also looked at the changing trends over the years, and the differences by race. Statistically speaking, the economics absolutely favor getting an advanced degree of some kind, whether it’s a PhD or MD or JD or whatever.


Hard to say whether economically a PhD always makes sense, but it certainly can open doors that are otherwise firmly closed.


Well, sure. Everything you end up doing in life will open doors that would have otherwise remained firmly closed if you did something else instead.


"Everything you end up doing in life will open doors that would have otherwise remained firmly closed"

Oh no. It is also quite possible to do things that will very firmly close doors that were open before, or to make sure some doors never open...

(quite a few doors are also not worth going through)


Of course. In fact, it is widely recognized that in a poor economy PhDs can be in an especially bad economic place as any jobs that are available deem them "overqualified".



