Hacker News
Simulating Jupiter (emildziewanowski.com)
362 points by imadr 11 days ago | hide | past | favorite | 37 comments

When I was at university, we did a different Jupiter simulation - the whole planet. We were fortunate enough to have a comet smack into it and ring it like a bell.

Then a few of my senior colleagues used the observations in asteroseismology models (a generalised helioseismology model really) to study the interior.

https://en.wikipedia.org/wiki/Asteroseismology

https://en.wikipedia.org/wiki/Comet_Shoemaker%E2%80%93Levy_9...


Jupiter is basically a big broom for the Sol system. It's quite a nice GSV, don't you think?

This article was a joy to read, both for the explanations and the visuals. I'm not knowledgeable at all in visual generation, but I'm now wondering about other ways to extend the method.

What other shapes could be coupled with the storm-creation technique to produce large-scale transitions, where for example a large vortex follows a sigmoid across the other zones?

Or even: in what subtle ways could the visuals follow the envelope of a Hans-Zimmeresque audio background?

Thanks for sharing this blog!


Hello everyone! I'm the author of the article. First of all, thank you so much for sharing it here. I've been taking note of the feedback - I'll try to fix the issue with contrast and other UX problems. If there are any specific suggestions or further feedback you have, please feel free to reach out to me. Thanks again for taking the time to read and share the article!

Fluid mechanics guy here. Let me first say this looks really nice overall!

The part with probably the highest potential for improvement is the sharpening; the artifacts there still look a bit odd.

Physically speaking, what you see on Jupiter (and on a river) is an interfacial flow. There is a divergence-free bulk flow underneath, but the interfacial flow itself has a lot of divergence. Upwellings have positive divergence and supply fresh stuff (colour!), downdrafts have negative divergence and consume stuff/colour.

But wait! You are using curl noise for your vector field! Of course the divergence is then zero everywhere!

If you take just the gradient of the scalar noise field you use for your curl noise, this will have lots of divergence and "compatible shape". Just scale this down a bit and mix with your curl noise.

And then finally take the value of your scalar noise field, scale it to be symmetric around zero, and use this to determine how much color to add/remove.

I think this will remove your need for sharpening entirely.

Disclaimer: this is just top-of-my-head while walking home.
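The recipe above can be sketched numerically. A minimal Python sketch, where the sine-based `psi` is a hypothetical stand-in for a real scalar noise field (Perlin/simplex in a shader), and `eps` and the sample point are arbitrary choices:

```python
import math

def psi(x, y):
    # Stand-in scalar "noise" field; a real shader would sample Perlin/simplex noise.
    return math.sin(1.3 * x + 0.7 * y) + 0.5 * math.sin(2.1 * x - 1.7 * y + 1.0)

h = 1e-4  # finite-difference step

def grad(x, y):
    # Gradient of psi: lots of divergence, with a "compatible shape".
    gx = (psi(x + h, y) - psi(x - h, y)) / (2 * h)
    gy = (psi(x, y + h) - psi(x, y - h)) / (2 * h)
    return gx, gy

def curl_vel(x, y):
    # 2D "curl noise": rotate the gradient 90 degrees -> divergence-free by construction.
    gx, gy = grad(x, y)
    return gy, -gx

def divergence(field, x, y):
    u1, _ = field(x + h, y); u0, _ = field(x - h, y)
    _, v1 = field(x, y + h); _, v0 = field(x, y - h)
    return (u1 - u0) / (2 * h) + (v1 - v0) / (2 * h)

def flow(x, y, eps=0.2):
    # Mix: mostly curl noise, plus a scaled-down gradient to model up/downwellings.
    u, v = curl_vel(x, y)
    gx, gy = grad(x, y)
    return u + eps * gx, v + eps * gy

# psi itself, rescaled around zero, would then drive how much colour is added/removed.
print(divergence(curl_vel, 0.3, 0.7))  # ~0: pure curl noise has no divergence
print(divergence(grad, 0.3, 0.7))      # clearly nonzero: this is the Laplacian of psi
```

The mixed `flow` field inherits a small but nonzero divergence from the gradient term, which is what supplies and consumes colour at the up/downwellings.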


Really great observations - thank you! I already use the method you described - curl is mixed with some amount of gradient to artificially bring color up from the bottom layers. It can be observed at the center of the red cyclone in the last YT clip. Keep in mind - I wasn't going for true fluid mechanics; I just used some of the flow patterns observed in real fluids and layered them on top of each other to give the illusion of more complex behavior. As for the sharpening - it is used to counteract the blurring effect of interpolating the color texture every frame.
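The blur being counteracted here is easy to reproduce: resampling a texture with linear interpolation every frame amounts to repeatedly averaging neighbouring texels, which diffuses sharp features. A toy 1D Python illustration (periodic domain, half-cell advection step, all parameters arbitrary):

```python
n, frames = 64, 20
colour = [0.0] * n
colour[n // 2] = 1.0  # one sharp, fully saturated texel

for _ in range(frames):
    # Semi-Lagrangian step: each texel fetches the value half a cell upstream
    # via linear interpolation, i.e. an average of itself and its left neighbour.
    colour = [0.5 * colour[i] + 0.5 * colour[i - 1] for i in range(n)]

print(max(colour))  # the spike has smeared well below 1.0; total colour is conserved
```

The peak shrinks a little every frame even though no colour is lost, which is why a per-frame sharpening pass (or a divergent source term, as suggested above) is needed to keep features crisp.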

Nice work. You briefly mentioned curl noise... About 10 years ago I wrote gaseous-giganticus[1], which uses curl noise to create gas-giant planet textures. Unlike yours, they don't move, but they don't look too bad (and looking at Jupiter, you can't really see it move over small time scales anyway). Some animation is possible[2] with gaseous-giganticus, but not in real time, as it's all done on the CPU, and it doesn't really sustain over time: it starts off looking very fuzzy, resolves into something pretty nice, then gets weird. Here is some more output from early days: https://imgur.com/a/9LipP

Here are some slides about the development of gaseous-giganticus (best viewed with a real computer, not on a phone, as it uses arrow keys to navigate the slides): http://smcameron.github.io/space-nerds-in-space/gaseous-giga...

[1] https://github.com/smcameron/gaseous-giganticus [2] https://imgur.com/mqCwMeI


Really cool - thanks for sharing! I thought about using a cubemap to simulate the whole planet but, since I only use the effect as part of a skybox, it would be wasteful. You also use particles instead of textures. Are you familiar with the work of Larry Yaeger and Craig Upson? They created Jupiter for "2010" using a similar, particle-based approach.

I am aware of the existence of that work, but was never able to find any details about it.


Very interesting work! The end result looks fantastic.

On a related note, here's an experiment I did using fluids in Maya to create a closeup of Jupiter's bands. It was created while I worked at a planetarium - https://thefulldomeblog.com/2014/01/30/jupiter-bands-simulat...


Props for a site of that visual complexity that was performant, visually appealing, and eminently readable on mobile.

> performant

Huh. Opening this webpage on Firefox floored my laptop (8 core 16GB). The lag was several seconds, including for clicking "back" or opening a new tab.


Follow-up: this only seems to be the case when the "Animated Great Red Spot" image is in view.

May I ask what GPU you have?

512MB ATI AMD Radeon Graphics

I'm afraid the only thing I can do in such a case is to display a static image instead of a shader. Would you prefer that?

I was able to view it smoothly on my phone, so I'm not too fussed, but that might be a better experience for anyone else who has the same issue in the future.

It opened instantly and worked smoothly in Firefox on my 8 year old android.

Yes, on my phone it's fine but on my laptop it's a nightmare.

The use of weird non-native scrolling really hurts navigation, and the full justification looks clumsy when the screen is narrow. But otherwise it's not terrible.

The article is incredibly interesting, but the choice of colors is so low-contrast that I can only read it in "reader mode", where the animations don't work. I have resorted to "select all", where the letters stand out a bit, but it's ugly and not very ergonomic...

If the consensus is that the mobile color scheme is better than the desktop one, I can just change it.

It's white on black, or at least white on very dark gray. Contrast is about as high as it could be on my device.

Might there be a problem with your device?


> It's white on black, or at least white on very dark gray.

It's light grey (#666b67) on dark grey (#222623), not much contrast on desktop. Mobile uses other colours, the same background (#222623) but a lighter font color (#B2B5B3), which is _way_ better.

Why not use the same foreground color on desktop?


> eminently readable on mobile

Sadly the font colour on non-mobile devices is way too dark, the whole site is way too low contrast: #666b67 (desktop) vs #B2B5B3 (mobile) on #222623.

Desktop colours: https://webaim.org/resources/contrastchecker/?fcolor=666B67&...

Mobile colours: https://webaim.org/resources/contrastchecker/?fcolor=B2B5B3&...
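Those figures match the WCAG 2 contrast formula, which can be checked directly. A small Python sketch of the computation (relative luminance per channel, then the (L1 + 0.05) / (L2 + 0.05) ratio):

```python
def channel(c8):
    # 8-bit sRGB channel -> linear intensity, per the WCAG 2.x definition.
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex6):
    r, g, b = (int(hex6[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast(fg, bg):
    hi, lo = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

print(round(contrast("666B67", "222623"), 2))  # desktop: ~2.8, fails WCAG AA (needs 4.5)
print(round(contrast("B2B5B3", "222623"), 2))  # mobile:  ~7.4, passes AA
```

So the mobile foreground colour clears the AA threshold for body text while the desktop one falls well short.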


The author seems to be experimenting in UE4 or UE5 (a material graph is shown in a screenshot), but the examples are displayed in Shadertoy embeds?

I'm wondering, is there a direct way to export a UE4 material shader to Shadertoy, or some easy conversion tool? Otherwise it would have taken eons to produce this page...


UE translates shader graphs to HLSL (High-Level Shading Language), see:

https://dev.epicgames.com/community/learning/knowledge-base/...

Shadertoy needs GLSL (the OpenGL Shading Language). Luckily, UE has an HLSL -> GLSL transpiler built in:

https://docs.unrealengine.com/4.27/en-US/ProgrammingAndScrip...

There are other HLSL transpilers: Microsoft's ShaderConductor, Unity's hlsl2glsl, Vulkan's vcc, etc.

To port your favorite Shadertoy examples back to UE, you can transpile GLSL to HLSL with ShaderTranspiler, glslcc, ShaderConductor, etc.

Disclaimer: I don't use UE or Shadertoy. In fact, this is my first exposure to GLSL/HLSL. My claims may be inaccurate.


The website acts as my portfolio - I'm a game developer, so that is why I use the Unreal material graph. Shadertoy lets me demonstrate ideas in a live example that is animated, and anybody can play with its code. For the most part HLSL (Unreal) can be translated to GLSL (Shadertoy), but that wasn't the case here. In Unreal I use my own custom flow textures; in Shadertoy that is not possible - everything has to be stored in code. Even though the basic idea behind the Unreal and Shadertoy shaders was the same, the implementations were quite different. It was easier to just do everything twice than to convert it. And yes - it took a lot of work :).

Looking at the final Shadertoy example (https://www.shadertoy.com/view/4XSXz3) I would think he just recreated each effect in Shadertoy (the variable and function names don't seem exported to me).

Most of the effects on the page are only a couple of lines, it seems, so maybe he did just rewrite them all? I do wonder why he bothered with UE material graphs if he's this proficient at shaders anyway.


I can imagine using material graphs is a much better way to experiment, iterate, and progressively build up the effects than hand-coding a shader. It's kind of like asking why write C# in Visual Studio when you can just write assembly.

I can almost feel the drops in my hair. But for real this is so cool

0.3 fps...

The other articles on this site are just as fascinating. What a treasure!!

Storm lightning and aurora?

This is really cool!

now this is programming :) thank you!


