
Gammon Forum



Ray tracing, DirectX, OpenGL, graphics, etc.

It is now over 60 days since the last post. This thread is closed.



Posted by Shadowfyr   USA  (1,783 posts)
Date Fri 02 Mar 2007 09:35 AM (UTC)

Amended on Sat 10 Mar 2007 08:36 PM (UTC) by Nick Gammon

Message
Continued from thread at:

http://www.gammon.com.au/forum/bbshowpost.php?id=7666

(Nick Gammon)






Yeah, blame David. It was to clarify things for him and *is* completely irrelevant to everything else, save that it's an avenue I chose to *intentionally* close, so as not to sink to her level, which makes it harder for me to get a copy at all.

Going back to the real issue... Yeah, in some cases it's just headers. The problem is, in the case of .NET, the only reason you can *still* code for ActiveX is a) people seriously disliked .NET and the projected eventual loss of OLE, so they decided to backpedal, and b) they couldn't get rid of OLE completely *now* without breaking 99% of the existing software. It's pretty damn clear from everything from their press releases to their hype of .NET to their actual removal of the wizards and libraries needed to "make" ActiveX controls in the newer compilers, etc., that their intent was, and still may be, to deprecate the entire OLE system, in favor of their bloody "Everything is an internet applet!" architecture. A system that is *only* better in that it helps them and other people make remote servers that host stuff you have to pay usage fees on every time your kid wants to draw a bad picture of a tree or something.

They didn't just make it harder to use OLE, they tried to make it impossible, even to the point of removing references to the whole idea from basic documentation. Drives me nuts. It's like trying to find anything on **real** "ray based" raytracing, now that every damn application on the planet uses either DirectX or OpenGL to funnel triangles into a scanline graphics card... You still want photo realism, you go with raytracing; you want real time graphics, then, if you try really damn hard and don't mind needing 4 TB of disk storage for the 3D meshes and textures, you can *almost* get the same result. Want to code something that does the former, instead of the latter... Forget it, unless you can find a book on it, and nothing new has been published in 20 years... :(

Posted by David Haley   USA  (3,881 posts)
Date Reply #1 on Fri 02 Mar 2007 10:26 AM (UTC)
Message
Quote:
It's like trying to find anything on **real** "ray based" raytracing, now that every damn application on the planet uses either DirectX or OpenGL to funnel triangles into a scanline graphics card... You still want photo realism, you go with raytracing; you want real time graphics, then, if you try really damn hard and don't mind needing 4 TB of disk storage for the 3D meshes and textures, you can *almost* get the same result. Want to code something that does the former, instead of the latter... Forget it, unless you can find a book on it, and nothing new has been published in 20 years... :(
Where of all places did this come from? What kind of ray-tracing are you looking for? People use ray-tracing all the time for high-quality rendering. Whole classes are still taught about ray-tracing. I don't know why you say that books haven't been published in 20 years. I just googled "ray tracer" and got all kinds of results, varying from educational to implementations.

Maybe you have a very specific application in mind. I'm not sure why it's so evil to use DirectX or OpenGL to "funnel triangles into a scanline graphics card" -- you almost make the phrase sound dirty or something. Why should programs do software rendering when the 3d card can do it?

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Shadowfyr   USA  (1,783 posts)
Date Reply #2 on Fri 02 Mar 2007 07:00 PM (UTC)
Message
Because the card ***doesn't*** do it. The card uses cheats and hacks that are useful for real time games and other things, but which cannot produce the same quality or effects. And I have looked for those same websites. Most, if they truly talk about actual raytracing, are either a) closed source, b) restrictive in licenses, c) assuming you already know most of the vector math, etc., or d) not actual raytracing sites, but ones that use OpenGL or DirectX.

There is no hardware based system in existence today, with the possible exception of one in development in Sweden or some place, and **none** on the market, that can do raytracing. They all use systems that can quickly funnel data to a scanline based engine, but which cannot perform complex things like global illumination (without pre-building the illumination tables), true reflection (especially of off-screen objects that don't "exist" in the image once the scanline system sorts out what is visible), and so on. They don't even really do refraction correctly, and until DirectX 10 comes out, they won't do focal blur, which is damn near "standard" in ray path tracing systems.
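For readers following along, everything a tracer does builds on one primitive operation: intersecting a ray with an object. A minimal, illustrative sketch in Python (not taken from POV-Ray or any real renderer):

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the nearest positive t where the ray origin + t*direction
    hits the sphere, or None if it misses."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    # Coefficients of the quadratic |origin + t*direction - center|^2 = r^2
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return None          # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2*a)
    return t if t > 0 else None
```

A scanline card never evaluates this per pixel; a ray tracer does little else, which is where both the accuracy and the render times come from.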

It's real easy to find info on systems that are hardware dependent and can't do 20-30% of what a raytracer actually does. Finding useful information on the mechanics of the real thing is damn hard; it often never covers the most basic principles needed to start coding one, or is so old that the links to the code/software aren't even working anymore.

Now... There are some, like POV-Ray, that are exceptions, in that you can look at the code, can find the software, etc., but unless you have some idea "how" the thing works in the first place, it's not going to help much to look at it. In the end, the "entry level" stuff isn't available any more to casual-interest people like me who either can't afford, and/or can't drive 5,000 miles to, some place that does teach it.

In any case, just try going some place like the POV-Ray forums with the claim that hardware is better or that it's the same thing. More than a few show up there once in a while saying, "Gosh, why don't you modify POV-Ray to use the video hardware for X, Y and Z?" The answer is inevitably, "It won't work.", "The card can't actually do that.", "Cards don't use enough precision in their calculations to be useful, even if it could work.", and so on. They are not designed to do the same thing. But, since the lay person can't tell the difference, and the card makers are intentionally **not** pointing out the difference, 90% of the sites for scanline rendering call themselves raytracing applications and get by with it.

Believe me, this is one thing **I** know way more than you about, because I have been reading the forums for a real raytracer for like 10 years, and even if I don't know the math needed to make one from scratch, I do know why they are *not* the same thing, and why it's going to require a hardware implementation of raytracing algorithms, instead of scanline, to legitimately claim that hardware is "ever" better for anything but fast, cheap, plastic looking graphics. If you want to know the detailed differences, you can try reading some of the threads on why they differ at news.povray.org.

But, this is dragging the thread in a completely pointless direction, so let's drop it.

Posted by David Haley   USA  (3,881 posts)
Date Reply #3 on Fri 02 Mar 2007 07:09 PM (UTC)
Message
You are tilting at windmills! You managed to answer lots of questions I did not ask and you completely missed the most important one: what is your area of application??

I never said hardware was better for raytracing! I only said that for many purposes it is fine, and there is no point reinventing the wheel making your own renderer.

This whole argument is pointless, stupid even, until you say what you are trying to DO. I have no idea what your context is, which is why I tried asking you what your area of application is.

Quote:
In the end, the "entry level" stuff isn't available any more to casual-interest people like me who either can't afford, and/or can't drive 5,000 miles to, some place that does teach it.
It sounds to me that what you really want is a free lunch. And we all know what they say about free lunches...


Posted by Shadowfyr   USA  (1,783 posts)
Date Reply #4 on Sat 03 Mar 2007 06:19 PM (UTC)
Message
Ok... My application is:

1. Small footprint for the *scene* files.
2. High detail with minimal (or better, *no*) premade textures.
3. Easy to work with script that is a bit more flexible than POV-Ray's current implementation.
4. I don't even want to see "mesh" as an option, since it breaks rule #1.
5. No license restrictions for use as a library and linking into another application.
6. Exclusion/limits on features that cause long render times.

Now, POV-Ray almost fits this. But... while you can do 1 and 2, 3 won't happen until they start work on the major rewrite they plan. Items 4 and 6 are not possible, because the license requires you to make "all" derived copies work "as per the original", so you can't simply turn off stuff that is going to make the scene file bloat or take massive times to render. And 5... I think there are ways you can sort of sidestep that, as long as you make certain the only way to use the integrated version is *if* the render window is prominent and displays the licensing for it *before* letting you hide the window or otherwise do something that prevents the user from knowing what is being used.

The point is to provide 3D that can be built and adjusted, on the fly, on the server end, the same way that rooms or mobs might be in a mud, while limiting the user-end storage requirements and the transfer time (i.e. download sizes) for the data needed to produce the image. Now, a library of prebuilts, like huts, medieval architecture, etc., that can be downloaded by the user to speed this up would be nice, but the point is *still* to use only primitives, so that the scene file is built, for example, in the case of a castle wall, from a few hundred script-generated copies of a block, or one solid surface with strategic cuts taken out (which would take only maybe 20 objects), and some procedural textures, all of which would take... 100k maybe? This is as opposed to what either OpenGL or DirectX games might use for the same thing, which could consist of a 2-3MB mesh and a 1MB image file, not including all other parts used to make the castle.
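A script-generated wall of the kind described here really is only a few lines of generator plus a few hundred bytes of scene text. A hypothetical sketch (the `castle_wall` helper and the exact scene syntax are invented for illustration, loosely POV-Ray flavoured):

```python
def castle_wall(rows, cols, block=(2.0, 1.0, 1.0)):
    """Emit a POV-Ray-style scene fragment for a wall built from one
    scripted primitive repeated in a grid -- the whole wall costs a few
    hundred bytes of script, not a multi-megabyte mesh."""
    w, h, d = block
    lines = []
    for r in range(rows):
        offset = (w / 2) if r % 2 else 0.0   # stagger alternate courses
        for c in range(cols):
            x, y = c * w + offset, r * h
            lines.append(f"box {{ <{x},{y},0> <{x+w},{y+h},{d}> }}")
    return "\n".join(lines)
```

The server would send only the parameters (or the generated text), and the client-side renderer expands them, which is exactly the footprint argument being made above.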

That clearer? Seriously, the only answer to this that would work with cards would still take a long time to do, since it would require using something like marching cubes to "tessellate" the primitives into meshes the card can deal with, then algorithms to render the procedural textures into a bitmap the card could then use to paint the object. And that is going to, in some cases, take *more* time than just parsing a text file and using software to produce an image. So, as I said, to me the best solution is to figure out how to make something like POV-Ray, but without some of the license issues that make using it in such a context pointless.

Posted by David Haley   USA  (3,881 posts)
Date Reply #5 on Sat 03 Mar 2007 09:10 PM (UTC)
Message
That's not an application, it's a list of requirements. What are you doing -- are you trying to do real-time graphics? Are you trying to do one-off rendering? Photo-realistic rendering?

Quote:
This is as opposed to what either OpenGL or DirectX games might use for the same thing, which could consist of a 2-3MB mesh and a 1MB image file, not including all other parts used to make the castle.
I don't understand what you're doing. You're talking about a server providing 3d graphics to a client. What does that even mean? Is the server sending some kind of procedural representation that the client then renders? Or is the server sending the actual picture?

I don't see why you think it's impossible to describe the data in a very compact form for OpenGL or DirectX. Since I don't know exactly what you're doing it's hard to tell for sure, but it seems to me that you could do precisely the same thing for OpenGL: you could describe something in terms of primitives, then when you tell the client to render it, it takes those primitives and renders each one in turn, building up the bigger picture.

Anyhow it's really hard to have a meaningful conversation since I have no clue what you're actually planning on doing (but at least I know what your requirements are...)

Still, since this is going rather far off topic, it might be more appropriate to just drop it. (Of course, part of me still wants to know what exactly you're talking about, because as stated it makes no sense to me! :P)


Posted by Shadowfyr   USA  (1,783 posts)
Date Reply #6 on Sun 04 Mar 2007 06:19 PM (UTC)
Message
Look. There are basic facts about scanline and raytrace that make them non-interchangeable. For example, yes, OpenGL has primitives, but it can't handle CSG, where you use one primitive to "cut" another, not without *extra* processing, and in some cases it just can't at all. The conversion from a primitive to what a card can handle requires that it "approximate" the result; this means wasting processing time you would have used anyway to produce a purely mathematical result and instead making a "close to the same" mesh, which then still has to be rendered on the card. You're doing extra work for no real gain. Even if the card itself supports the primitives, that just means the card has to waste time doing it.

Basically, I couldn't care less about real time. A render of 2-3 seconds wouldn't hurt anything, but more than that... That is why I would want to place limits on how certain things are handled, or, alternatively, use a multi-pass system, so that the first pass gives an approximation of the image, then later passes, if they are allowed to happen, produce the higher quality result. And no, I don't mean having the server send the image. That defeats the whole purpose. I want the "scene" to be something that can be sent in the smallest form possible, so that it doesn't lag a client too much, but also with minimal "storage" requirements on the user end **and** scriptable effects, like lighting, weather, etc., which BTW also don't actually work quite the same in a scanline system as they do in a raytraced one.

You want an example. Try doing any of the following using OpenGL and the **same** size description files:

http://local.wasp.uwa.edu.au/~pbourke/modelling_rendering/scc4/final/

Or even some like the one with pots (3rd down) in it from the prior contest:

http://local.wasp.uwa.edu.au/~pbourke/modelling_rendering/scc3/final/

which was also only 247 bytes. What are you talking about when you say compact? 500k, compressed, which would then need to be unpacked to use?

The point is to minimize the footprint of the whole thing to the point where you can produce stuff like this, maybe for places that might normally use static MXP images (like you can find any...), while reducing the storage space needed on the user end to "as little as possible". Even an MXP mud would, right now, using static images, use megabytes of data to store cached images, with no capacity to adjust lighting, etc. in them, and 3D systems can take up **gigabytes**, just for the data, no matter how *compact* you make that data.

I just don't see, with the current state of OpenGL and DirectX cards, how you manage that, without *still* wasting the time to "trace" the object anyway and produce the card usable data needed, then end up with an inferior result from it, due to the limitations of the method. If you are going to need to waste time producing the mesh from the data, why not just do it in software in the first place, but place limits on what can be used?

For example, adding in some ambient lighting and removing the global illumination (radiosity) from the pots example makes it look less nice, but it also changes the render time from minutes to seconds, since it doesn't have to pre-calc the lighting. Radiosity is probably not something I would allow in a "first pass", or maybe even in a longer one, and especially not if the image isn't lit using static parameters (i.e., no time of day effects, sun position, etc.).

Some things like that would have to be integrated into the render system, so that the game server could, as part of the script, tell it specifics about the world, including if it was the moon(s) lighting it, what phase they were in, if it was overcast, etc. Macros in the script wouldn't do it, since that's even more "script" you would need to send, and you would need to change it every time the game had more suns, moons, or whatever for the world.

Look, it's a complicated set of requirements. Much of it cards can't handle, because either the data that needs to be front-loaded or generated on the player end is large, or the means to produce certain effects are just completely absent. But existing implementations don't really help much either, since they lack inbuilt support for some key things needed to make it work well, and any solution you could use with them would mean generating more data some place, then parsing more stuff, before you finally get the image started at all. And the OpenGL/DirectX methods just make some of the problems either much bigger or insurmountable, depending on what "feature" you are trying to do.

Let's just put it this way. From a practical standpoint, to do what I would like, I doubt OpenGL would produce better than 5 frames a minute either, which POV-Ray, if it had object persistence between animation frames and didn't need to reparse (coming in version 2.0 supposedly), could beat. Why? Because by the time you turn up all the settings in some game like EQ2 to where they match the "expected" results the game was designed to predict and run optimally with, the card is spending so much time trying to cheat its way around the limitations of scanline to get that result that straight mathematical models using ray path algorithms are **actually** faster. And again, I don't need 512MB to store a 247 byte file in memory, never mind gigabytes to store textures and data I am not using. I certainly don't need 1-2GB of system memory to "build" the data from the description file, so that the card has something to do. I would kind of like to avoid that if possible. lol

Posted by Shaun Biggs   USA  (644 posts)
Date Reply #7 on Sun 04 Mar 2007 10:44 PM (UTC)

Amended on Sun 04 Mar 2007 10:46 PM (UTC) by Shaun Biggs

Message
Without attempting to get into any arguments here, all I can say is that these remind me of the old demo competitions in the early 90s with animations. Future Crew and some other groups put together a lot of great demos.

I don't know terribly much about graphics rendering at all, so I'm a little confused here. I thought all the shortcuts that 3d games had to take were because of the unpredictability of the system. You can't tell if someone is going to continue moving in a straight line, or start attacking, or if they will suddenly change clothes. The only time I've seen someone play around with 3d graphics was a friend of mine who was doing a project for school a few years back, and a 20 minute video he made took roughly a day to render (he said he could take shortcuts and knock the time down a lot from a day). I know that we don't have actual lighting or reflections in games because of the time it would take to render them. Please let me know where I'm lost here.

I'm also kind of curious, since I don't have POVRay. How long do these files take to render? I know the limit is 5 days according to the rules for the contest, but I'm assuming they take only a small fraction of that time.

It is much easier to fight for one's ideals than to live up to them.

Posted by Shaun Biggs   USA  (644 posts)
Date Reply #8 on Sun 04 Mar 2007 11:41 PM (UTC)

Amended on Mon 05 Mar 2007 01:33 AM (UTC) by Shaun Biggs

Message
Wow, POVRay is a Debian package :) time for the rjjaij.pov file (3rd one down that you mentioned), on a sempron 2800+ with 512MB of ram and an Asus EN6200LE:

#biggs@4[~]$ povray scene.ini -Irjjaij.pov -Orjjaij.tga
... blah blah blah ...
Time For Trace:    0 hours  2 minutes  21.0 seconds (141 seconds)
    Total Time:    0 hours  2 minutes  21.0 seconds (141 seconds)

my favourite one from year 4 was the second place winner... wow does that one take a while. I went out to dinner while it was running, so I wasn't chewing up processor time with anything else at all. POVRay used an average of roughly 99.2% of processing power, and the maximum RAM usage is only 1.1%. I am using this to justify buying a better processor with my tax refund.

Time For Photon:   0 hours  0 minutes  19.0 seconds (19 seconds)
Time For Trace:    2 hours 29 minutes  53.0 seconds (8993 seconds)
    Total Time:    2 hours 30 minutes  12.0 seconds (9012 seconds)

Guild Wars fps: 40ish, but nowhere near as pretty as the Persistence of Vision render. Average 90% of processor power, and pretty much all of my RAM.


Posted by David Haley   USA  (3,881 posts)
Date Reply #9 on Mon 05 Mar 2007 01:23 AM (UTC)
Message
Quote:
Look. There are basic facts about scanline and raytrace that make them non-interchangeable.
Argh! Thank you for stating the obvious. I know the difference quite well and I never said they were interchangeable. This is massively frustrating because I am fishing for something in total darkness, *still* having little idea what exactly you are doing. I take guesses, stabs at things but then you come back telling me how little I know, just because I happened to guess incorrectly about what you were doing, and all the time you fail to tell me what exactly you are trying to do. It is incredibly frustrating.

Quote:
What are you talking about when you say compact? 500k, compressed, which would then need to be unpacked to use?
If you noticed, those files are a programming language of sorts. It's a remarkably compact one, but sure, you could probably reproduce it in OpenGL. I don't feel like doing it because it would take a really long time to build a system capable of unpacking all that, but what makes you say you can't use exactly the same file format in OpenGL?

OpenGL is just something that puts stuff on the screen. If you wanted to, you could render your entire scene pixel by pixel. Of course, that would be really slow and defeat the point, but with that in mind, you could in fact re-implement a ray-tracer if you wanted to.
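The "render your entire scene pixel by pixel" idea amounts to a loop like this; a minimal sketch, with a caller-supplied `shade` function standing in for whatever per-pixel logic (ray tracing included) you want, and the finished buffer handed to whatever puts pixels on screen:

```python
def render(width, height, shade):
    """Software render loop: compute every pixel yourself, then hand the
    finished buffer to the display layer (OpenGL, a PNG writer, etc.).
    `shade` maps normalized (u, v) coordinates to a color tuple."""
    framebuffer = []
    for y in range(height):
        row = []
        for x in range(width):
            # Normalize pixel coordinates to [0, 1] (assumes width, height >= 2)
            u, v = x / (width - 1), y / (height - 1)
            row.append(shade(u, v))
        framebuffer.append(row)
    return framebuffer
```

Plugging a ray tracer in means making `shade` fire a camera ray per pixel, which is exactly the "slow but possible" route described here.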



About the game, though, you seemed to suggest later on that you want to store the procedural descriptions on a MUD server, and send them over to the client who will then take care of rendering them. In other words, send an image over, described not in pixels but in procedures.
Well, as I said above, there's no reason you can't do that in OpenGL. You seem to object to needing a whole system to draw the picture, but you need the same thing if you're doing ray-tracing.

What are the technical reasons why you think you cannot specify a scene procedurally for OpenGL?

Of course, you would have to go write code to "execute" the procedures and translate them to something OpenGL can deal with (even if it's just points on the screen) but nonetheless it is still possible.
Perhaps what you meant by "impossible" was "I don't want to do it that way", which is a valid problem (I wouldn't really want to either, unless I really had to), but that would be a completely different statement!


Quote:
I just don't see, with the current state of OpenGL and DirectX cards, how you manage that, without *still* wasting the time to "trace" the object anyway and produce the card usable data needed, then end up with an inferior result from it, due to the limitations of the method


Who says it's going to be inferior? OpenGL can be used, if you'd like, as a tool to just put pixels on the screen. So you can do a lot of your logic in software, and then just put the pixels on the screen.

Quote:
If you are going to need to waste time producing the mesh from the data, why not just do it in software in the first place

Well, that's an interesting question, but as I said above this is not a technical impossibility, it's just a question of having to translate a procedural description into OpenGL.

Quote:
Some things like that would have to be integrated into the render system, so that the game server could, as part of the script, tell it specifics about the world, including if it was the moon(s) lighting it, what phase they were in, if it was overcast, etc

I thought we didn't care about real-time. OK, this isn't second-by-second, but suddenly we need to care about rendering the same thing relatively often as things change. Maybe it's not such a good idea to spend minutes re-rendering the image every time the weather changes...

Quote:
I certainly don't need 1-2GB of system memory to "build" the data from the description file, so that the card has something to do.
If you're going to make clear, empirical claims like this, I would like to see some kind of data point. For example, some kind of proof that it would actually require that much.

Have you ever heard of procedural terrain generation for OpenGL? It's quite cheap in terms of memory, and produces nifty results, all in a very compact form. And no, you don't need 1-2GB of memory to convert the procedure into triangles.
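As a rough sketch of what procedural terrain generation looks like: a tiny height function expands into the triangles the card consumes, so the stored description stays a few bytes while the generated geometry can be arbitrarily dense (the `terrain_triangles` helper is invented for illustration):

```python
import math

def terrain_triangles(n, height=lambda x, z: math.sin(x) * math.cos(z)):
    """Expand a tiny procedural description (just `height`) into the
    triangle soup a card wants: two triangles per grid cell over an
    n-by-n grid."""
    tris = []
    for i in range(n):
        for j in range(n):
            # Four corners of this grid cell, lifted by the height function
            p = [(x, height(x, z), z)
                 for x, z in ((i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1))]
            tris.append((p[0], p[1], p[2]))
            tris.append((p[1], p[3], p[2]))
    return tris
```

The memory cost is only the expanded triangles, and even that can be generated and discarded chunk by chunk.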



What I'm really having trouble with is that the topic of discussion has not been made clear. I'm not sure what exactly we're talking about. You said that it's a technical impossibility to use OpenGL, but that's not true, and you've alluded to as much yourself. You were saying that there is no software available to do what you want, but it looks like POVray does just that (and a lot more, it seems). As I said above I'm really not sure what we're talking about and it's getting kind of tiring to take stabs in the dark. :-)


Posted by Shadowfyr   USA  (1,783 posts)
Date Reply #10 on Mon 05 Mar 2007 08:02 PM (UTC)

Amended on Mon 05 Mar 2007 08:15 PM (UTC) by Shadowfyr

Message
Ok... Let's try again...

OpenGL and the like use a technique that relies on two things: 1) meshes and pre-made objects, because it cannot produce a mathematically accurate model of objects, and because it's pretty much impossible to do something as simple as:

difference {
  sphere { <0,0,0>, 2 }
  box { <0,0,0>, <4,4,4> }
}


Assuming that code does what I want. lol Don't feel like double checking my numbers. It should clip one octant (1/8) of the sphere away. To do that, which is called CSG, raytracing uses collision detection to determine when the clipping object is struck, colors the point where it contacts that surface, then continues on until it escapes the bounding area. Bounding areas are sort of quick checks, which tell the engine that there is a bunch of objects within some region and it needs to start looking more carefully for the surface, or that it can stop doing so.
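Along a single ray, the CSG `difference` described above reduces to interval arithmetic: each solid contributes the span of ray parameters it occupies, and the cut keeps whatever of A lies outside B. A minimal one-ray sketch (illustrative only, not POV-Ray's actual code):

```python
def csg_difference(a_interval, b_interval):
    """1-D sketch of CSG along one ray: each shape contributes the
    [t_enter, t_exit] span it occupies, and `difference` keeps the parts
    of A that are not inside B. Returns the list of surviving spans."""
    (a0, a1), (b0, b1) = a_interval, b_interval
    out = []
    if b0 > a0:
        out.append((a0, min(a1, b0)))   # part of A before B starts
    if b1 < a1:
        out.append((max(a0, b1), a1))   # part of A after B ends
    return out
```

This is exactly why CSG is natural for a tracer (it already has these intervals from intersection tests) and awkward for a mesh pipeline, which has no per-ray intervals to subtract.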

2. Some sort of depth tree to determine what to bother rendering at all, based on what is visible.

The problem with the former is that, if you want to clip a complex object or set of objects, you *must* predefine them, since in order to "get" the mesh needed to produce the result, you would have to use raytracing to figure out where the edges of the clipped area were in the first place. Instead, games, and even demos, rely on the editing software to make those calculations and export a complete mesh object, which correctly approximates the result. Could you do it with OpenGL alone? Of course, but it takes extra time, and even a lot of editors, when they provide OpenGL previews, don't bother clipping the objects; they just show the clipped area in a different color, since it's easier than spending the 3-4 minutes needed to calculate every instance of clipping in a detailed scene so it looks right. The ones that don't do that instead clip as you create the object, so that the time spent making the calculations is handled immediately, then only store the mesh that results. This is faster, but only when *designing*, since if you are taking a raw file with only primitives in it, the software has to calculate all the clippings *before* you get the data needed to actually do the OpenGL.

In other words, if you do it that way, you are going to be spending part of your time doing a real raytrace "before" OpenGL ever handles the actual rendering of the final result.

Now, issue #2 is an even bigger issue. It affects refraction, reflection and lighting. Refraction can deflect a ray towards an object that **isn't** visible. Reflection can actually reflect an object that is 180 degrees behind the camera direction. Lighting can come entirely from a source behind the camera. OpenGL, which relies on the graphics card, is going to have to either a) live with not correctly reflecting things, not correctly lighting them, and not correctly refracting the light, *or* b) use a second camera to "see" what is supposed to be reflected, then project that onto the surface of the object doing the reflection. This is a real problem when you mix refraction and reflection, or where different objects have completely different reflective properties, and the more reflecting objects you have, the more time is spent "precalculating" the result. Raytracing simply bounces the light off of each object, until it either runs out of objects or hits a trace level limit (which is designed to prevent it getting trapped between two mirrors and trying to calculate *forever*).
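The bounce-until-trace-level-limit behaviour described here is usually just a capped recursion; a schematic sketch (the `scene.hit` interface is invented for illustration, returning a surface color, a reflectivity, and the bounced ray):

```python
def trace(ray, scene, depth, max_depth=5):
    """Recursive ray bounce with a trace-level cap, so two facing
    mirrors can't recurse forever. `scene` is any object providing a
    hypothetical hit(ray) -> (color, reflectivity, reflected_ray),
    or None on a miss."""
    if depth > max_depth:
        return (0.0, 0.0, 0.0)           # give up: contribute black
    hit = scene.hit(ray)
    if hit is None:
        return (0.1, 0.1, 0.1)           # background colour
    color, reflectivity, reflected = hit
    if reflectivity > 0:
        # Blend the surface colour with whatever the bounced ray sees
        bounced = trace(reflected, scene, depth + 1, max_depth)
        color = tuple(c * (1 - reflectivity) + b * reflectivity
                      for c, b in zip(color, bounced))
    return color
```

Each extra trace level costs one more full intersection pass, which is why the cap exists and why mirrored scenes get expensive.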

Oh, and then there are shadows. There is a reason why OpenGL and DirectX use "shadow maps". Why? Because they can't calculate shadows. They only look at things from the perspective of what is *immediately* in front of the viewer and in that direction, so shadows, which are caused by the direction of the "light", can't be calculated, since the scanline system isn't based on the directionality of the light, only the visibility of the objects. Some things they have added some limited hacks for, but those produce inferior results, since they are just that: hacks used to try to approximate what should happen, not physical models. For example, you could have the card, maybe, calculate a shadow map itself, for a single light source. It can't do it for any source *other* than that light, or if it does, it has to do so for each individually. It can't actually calculate them for "every" source at one time and produce an accurate map that is correct for all of the sources. Worse, shadow maps don't interact with objects the way a real shadow would. In EQ, shadows literally got cast "through" walls, because the algorithms didn't know to "bend" them to fit the wall and treat that wall as a point to "stop" projecting the shadow. Fixing that is obviously possible, but you still get false shadows, which do not accurately reflect all light sources or how everything in a room interacts.
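The contrast with shadow maps is that a ray tracer answers the shadow question directly: fire a "shadow ray" from the shading point toward each light and see whether anything blocks it, per light, per point. A minimal sketch against a list of spheres (illustrative, not any renderer's real code):

```python
import math

def in_shadow(point, light, spheres):
    """True if any sphere blocks the segment from `point` to `light`.
    Spheres are (center, radius); a closest-approach check on the shadow
    ray stands in for the full intersection test."""
    d = [l - p for l, p in zip(light, point)]
    dist = math.sqrt(sum(c * c for c in d))
    d = [c / dist for c in d]                    # unit shadow-ray direction
    for center, radius in spheres:
        oc = [c - p for c, p in zip(center, point)]
        t = sum(a * b for a, b in zip(oc, d))    # closest approach along ray
        if 0 < t < dist:                         # blocker must sit between them
            perp2 = sum(c * c for c in oc) - t * t
            if perp2 < radius * radius:
                return True
    return False
```

Because it is evaluated per light and per point, this handles any number of lights and never leaks through walls, at the cost of one extra intersection pass per light.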

I suggest you read up on just what scanline actually does and what it can't do. These are just the most obvious examples, and they require a lot of "pre-done" work to correct. They can't be calculated on the fly from a description file or script. They must be done *beforehand* to get even close to the same results as raytracing. And even then, some, like mixing multiple mirrored reflections and refraction, are virtually impossible without doing 90% of the work to make them happen "before" the card ever gets its hands on the data.

Oh, and Shaun, the thing consuming all the time there in the second-place winner is this "photons { count 1e6 }", when non-adaptive. Earlier methods used faked caustics to get the light-scattering effects through crystals, which can produce things like the rainbow effect from the prism. That could take even more time, or less, depending on the object. Photons are a means of precalculating a mass of data to figure out just how the light *actually* gets affected by a prism. Interestingly, as you can see below, it takes less time to do photons than to do the normal trace, even though part of the time is spent "precalcing" the data, though only when using "adaptive" lighting. That might change a lot with more than one light source producing the photons, though.

On my system BTW, that image takes:

5 minutes 31 seconds - As is.
1 minute 58 seconds - With "Adaptive" uncommented.
4 minutes 50 seconds - With photons removed.
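To make the photon idea concrete, here's a toy Python sketch of the precompute-then-look-up split (nothing like POV-Ray's real photon mapping; the "refraction" curve is faked purely for illustration):

```python
# Toy of the photon idea: shoot many samples from the light once, record
# where they land, and let render time be a cheap density lookup. The bend
# function is an invented stand-in for real refraction through a prism.
from collections import Counter

def shoot_photons(count, bend=0.3):
    deposits = Counter()
    for i in range(count):
        x = i / count            # photon's entry position across the prism
        landing = round((x + bend * x * x) * 10)  # fake refraction curve
        deposits[landing] += 1   # precalculated: done once, before rendering
    return deposits

photon_map = shoot_photons(10_000)
# Render-time "caustic brightness" at a spot is now just a lookup:
brightness_at_5 = photon_map[5]
```

The expensive part (the loop) runs once up front, which is why the photon pass can still come out faster overall than brute-force tracing.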

Posted by David Haley   USA  (3,881 posts)
Date Reply #11 on Mon 05 Mar 2007 09:26 PM (UTC)
Message
Quote:
because it's pretty much impossible to do something as simple as:

difference {
  sphere { <0,0,0>, 2 }
  box { <0,0,0>, <4,4,4> }
}


There's no reason you can't do this as part of your own interpreted language rendered in OpenGL.

Classes here at Stanford have students implement their own ray tracers in OpenGL. No problem. Don't know what your issue is, because you haven't yet given actual technical reasons other than that OpenGL doesn't natively and directly support your specification language.

I'll have you note that your spec language isn't exactly inherent to ray tracing, either.
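For what it's worth, the CSG in that quoted snippet reduces to a point-membership predicate that any renderer, scanline or ray-based, could evaluate. A Python sketch, with the coordinates taken from the quoted example:

```python
# CSG "difference" as a membership test: a point is inside (sphere - box)
# iff it is inside the sphere and NOT inside the box. Nothing here is
# specific to POV-Ray or to ray tracing.

def in_sphere(p, center, radius):
    return sum((a - c) ** 2 for a, c in zip(p, center)) <= radius ** 2

def in_box(p, lo, hi):
    return all(l <= a <= h for a, l, h in zip(p, lo, hi))

def in_difference(p):
    # difference { sphere <0,0,0>, 2  box <0,0,0>, <4,4,4> }
    return in_sphere(p, (0, 0, 0), 2) and not in_box(p, (0, 0, 0), (4, 4, 4))
```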

Quote:
2. Some sort of depth tree to determine what to bother rendering at all, based on what is visible.

No. OpenGL natively only does z-coordinate depth checking. If you want a depth tree you write it yourself. It is not always appropriate.

Quote:
The problem with the former is that, if you want to clip a complex object or set of objects, you *must* predefine them, since in order to "get" the mesh needed to produce the result you would have to use raytracing to figure out where the edges of the clipped area were in the first place.

No. You can determine the intersection of mathematically defined shapes without using ray tracing.

Quote:
Instead, games, and even demos, rely on the editing software to make those calculations and export a complete mesh object, which correctly approximates the result. Could you do it with OpenGL alone? Of course, but it takes extra time,

Yes, which is why it's usually pre-computed. The intersection doesn't change, so it's much more efficient time-wise to compute it once and be done with it.

Quote:
Now, issue #2 is an even bigger issue. It affects refraction, reflection and lighting. [...]

You do realize that there are plenty of OpenGL libraries that do precisely this, right? Of course it's hard, but then again, there's no such thing as a free lunch, and it's also not exactly easy to implement a ray tracer!

Quote:
Raytracing simply bounces the light off of each object, until it either runs out of objects or you hit a trace level limit (which is designed to prevent it getting trapped between two mirrors and trying to calculate *forever*).

I like how you use "simply". Of course it sounds so simple to just bounce the light off.

The problem is that you are conflating the theory of ray-tracing with the implementation of a graphics library. Everything you have talked about is perfectly feasible using the OpenGL library. As I have said previously, OpenGL is, fundamentally, a library for putting pixels on the screen.
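As a sketch of that division of labour: compute every pixel yourself, then hand the finished buffer to OpenGL in a single call. Python here, with `shade` as a stand-in for real per-pixel ray math; the commented `glDrawPixels` line shows where GL would come in:

```python
# Do all the "ray" work in plain code, use GL only to blit the result.

WIDTH, HEIGHT = 4, 3

def shade(x, y):
    # Placeholder per-pixel computation (a real tracer would fire a ray).
    return (x * 255 // max(WIDTH - 1, 1), y * 255 // max(HEIGHT - 1, 1), 0)

def render():
    # Flat RGB buffer, row-major: exactly the layout glDrawPixels expects.
    buf = bytearray()
    for y in range(HEIGHT):
        for x in range(WIDTH):
            buf.extend(shade(x, y))
    return bytes(buf)

framebuffer = render()
# In a GL program this single call would put the pixels on screen:
# glDrawPixels(WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, framebuffer)
```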

Quote:
Oh, and then there are shadows. [...] Some things they have added some limited hacks for, but they produce inferior results, since they are just that, hacks used to try to approximate what should happen, not physical models.

I think you are confusing different issues. OpenGL is typically used for real-time rendering in which it is too costly to do all these things you want. So approximations are made, but this is not due to limitations of OpenGL, but due to limitations on the amount of time allowed for calculation.

I mean, you're comparing apples and oranges: you're comparing a ray-trace render that can take minutes or longer to an engine that uses OpenGL to render at more than 60 fps.


Quote:
In EQ shadows literally got cast "through" walls, because the algorithms didn't know to "bend" them to fit the wall and treat that wall as a point to "stop" projecting the shadow. Fixing that is obviously possible, but you still get false shadows, which do not accurately reflect all light sources or how everything in a room interacts.

This isn't a limitation of OpenGL, it's a limitation of the EQ code. If you admit it can be fixed, it boggles the mind why you think it's a limitation of OpenGL.

Besides, the false shadows are again due to approximations. EQ (and all 3D games) has completely different operating requirements from yours: they require real-time graphics; you are happy to render less than one frame per minute.

It's a completely inappropriate comparison.

Quote:
I suggest you read up on just what scanline actually does and what it can't do. These are just the most obvious examples and they require a lot of "pre-done" work to correct.

I'll pretend that I haven't heard your continued insinuations that I have no idea what I'm talking about despite continually showing why what you state is not a technical limitation of OpenGL but rather how one chooses to use it.

Quote:
They can't be calculated on the fly from a description file or script. They must be done *before hand* to get even close to the same results as raytracing. And even then, some, like mixing multiple mirrored reflections and refraction, are virtually impossible, without doing 90% of the work to make them happen "before" the card ever gets its hands on the data.

This is mind-boggling -- you are pretending that somehow ray-tracing works "on the fly" despite taking minutes or hours to render a scene!

This whole line of comparison is simply not appropriate, for all the reasons I have stated above.
It is completely improper to compare accuracy of a real-time rendering system to the accuracy of a system that allows itself minutes to render a single scene.

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Nick Gammon   Australia  (21,322 posts)   Forum Administrator
Date Reply #12 on Tue 06 Mar 2007 03:55 AM (UTC)
Message
Quote:

May the forces of evil become lost and confused on the way to your house.


I like the signature!

- Nick Gammon

www.gammon.com.au, www.mushclient.com

Posted by Shaun Biggs   USA  (644 posts)
Date Reply #13 on Tue 06 Mar 2007 05:35 AM (UTC)
Message
Quote:
On my system BTW, that image takes:

5 minutes 31 seconds - As is.
1 minute 58 seconds - With "Adaptive" uncommented.
4 minutes 50 seconds - With photons removed.

Now, I will admit that I don't have the greatest computer out there... far from it. But what hardware are you running that this is only taking 7% of the time that it took me to run? I definitely need some upgrades here.

It is much easier to fight for one's ideals than to live up to them.

Posted by Shadowfyr   USA  (1,783 posts)
Date Reply #14 on Tue 06 Mar 2007 06:50 PM (UTC)
Message
David.. If you simplify the requirements for OpenGL to the point where all it does is "put pixels on the screen", then of course none of this is an issue. All you're doing then is putting the final result on the screen using OpenGL. OpenGL is supposed to support graphics card functions that do real 3D effects, though. There are technical limitations that prevent graphics cards from doing anything except approximating raytracing. And z-coordinate depth checking, blah blah blah, doesn't solve the problem. It's still only able to handle what is **visible**, not things that are "in the scene" but not "in front of the camera". That there are some hacks in OpenGL to provide mapped images that mimic reflections doesn't change the fact that the hardware can't do them correctly, "period". You're missing the damn point with shadows too. That they fixed the wall issue isn't relevant, since they still can't change the fact that shadow maps are not accurate representations of the way shadows form, no matter how they are produced, walls or not. Close, maybe, but that's not good enough, and it takes more memory to hold them than just figuring out whether a pixel "is" in shadow through raytracing.

Oh, and one clear problem.. Raytracing with most applications is 32-bit, or even 64-bit on ones that support that. While the bandwidth for the cards is 32-bit or 64-bit, etc., that is **only** the bandwidth for transferring data to the card itself. The actual internals, from what I understand, are still basically 16-bit math, which means that the precision they calculate with is significantly worse than the software's. Some people tried to suggest changing POV-Ray to take advantage of some newer cards' inbuilt math libraries, to speed things up, but that can't work, since the precision of the values changes between the hardware and the software. I.e. POV-Ray uses 32-bit or 64-bit values, while the cards use 16-bit.

And that brings up another issue. I don't care what hacks you use or what you do to try to get around the hardware limitations, you are still stuck with less accurate results, because the hardware for 3D can't do the math with sufficient precision to produce the same result. It is comparing apples and oranges, but I am saying they are not even remotely interchangeable. You are the one insisting they are, and that it's somehow trivial to fake what I want in OpenGL, compared to doing real raytracing.

As to mapping an object's parameters: yeah, there are a number of things, like marching cubes, which can "map" the surface. None of them are as accurate as a pure math approach using ray paths. You end up calculating things you will "never" see, just so you can feed the card complete objects that it then hacks down to what you "can" see anyway. And, as I have tried to point out, if part of the object isn't "visible" then it won't correctly reflect or cast shadows, even if you have hardware that will calc them for you, and in some cases the algorithms used to determine visibility will even fail to show the visible part of the object. There are serious limitations to how it works, and all the "solutions" for things like refraction, reflection, etc. are hacks. They are not mathematically accurate, they can't deal with complex situations with multiple cases of light bending or reflection, etc. They simply can't reproduce the same results. Hell, even high-end applications like Maya, while they use some cheats for speed, and do use OpenGL previews, **still** usually do their final rendering pass using a raytrace algorithm. Others use other methods that approximate, since they are more interested in speed than realism. If 3D cards were so great, wouldn't you think they would be using those exclusively?

Some things from the POV-Ray FAQ that are pertinent:

"3DNow uses single precision numbers while POV-Ray needs (yes, it needs) double precision numbers. Single precision is not enough (this has been tested in practice)."

This is true of pretty much anything, including OpenGL, when using the hardware's math system.
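The single- vs double-precision point is easy to demonstrate directly: round a double through 32-bit storage and a real difference simply vanishes. A Python sketch (`struct` is used only to emulate single precision; no GPU is involved):

```python
# Squeeze a double through IEEE 754 single-precision storage and a
# small-but-real difference disappears, which is the FAQ's objection.
import struct

def as_float32(x):
    # Round-trip a Python float (double) through 32-bit float storage.
    return struct.unpack('<f', struct.pack('<f', x))[0]

a, b = 1.0, 1.0 + 1e-8     # distinct as 64-bit doubles...
same_in_32 = as_float32(a) == as_float32(b)
print(a == b, same_in_32)  # doubles differ, singles collapse together
```

Single precision resolves only about 7 decimal digits, so the 1e-8 offset is below its rounding threshold.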

"3D-cards are not designed for raytracing. They read polygon meshes and then scanline-render them. Scanline rendering has very little, if anything, to do with raytracing. 3D-cards can't calculate typical features of raytracing as reflections etc. The algorithms used in 3D-cards have nothing to do with raytracing.

This means that you can't use a 3D-card to speed up raytracing (even if you wanted to do so). Raytracing makes lots of float number calculations, and this is very FPU-consuming. You will get much more speed with a very fast FPU than a 3D-card.

What raytracing does is actually this: Calculate 1 pixel color and (optionally) put it on the screen. You will get little benefit from a fast videocard since only individual pixels are drawn on screen."

While some of this is less true now, it's still mostly true, since you still have to do a lot of footwork beforehand to mimic the same results, or you are limited to, like, one reflective surface, which can't reflect what is reflected in another nearby surface, etc.

This page is also clear on the difference:

http://www.acm.org/tog/resources/RTNews/demos/rtrt_faq.txt

You sacrifice realism and detail for speed. *period* If you don't need the speed, why the heck sacrifice the detail, just so you can use a card?

But, I don't know the full technical details, other than the ones I already mentioned. You want a better explanation for why it's not possible to interchange this stuff, talk to the POV-Ray developers. As for myself, it's no less trivial to implement a decent scanline-based system that **must** have a high-end card to work right than to write the purely math-based raytracing system. And the non-software-based ones are only extendable when the cards improve, which is pretty damn meaningless in some cases, since why the hell would I want to include something like focal blur if it forced everyone using my program to have Vista? (It's a DirectX 10-only feature on new cards.) I can do that fine in POV-Ray, and it's unlikely to be horribly difficult to manage in any other software-based system.

I don't know, maybe some hybrid system would work faster and thus a bit better. But pure graphics card systems... No thanks. And frankly, I have had bad experiences with OpenGL on my systems. It's also card-hardware dependent, and often the cards do not correctly support some OpenGL features, even though they say they do, causing OpenGL to use them when they are broken, or they just don't support the features at all. You are still chasing the limitations of the hardware, not the algorithms, and it seems like card makers sometimes don't give a @#$$@# if OpenGL works right with them, as long as DirectX does. Not a good way to make something that you want available to the widest possible set of users, unless all you use it for is to draw the pixels and none of the 3D calculations.

---

And Shaun.
My specs:

AMD 64 3200+ -> 2 GHz (running in 32-bit mode, since 64-bit Windows is.. nearly useless.)
2 GB memory

And well. Those are the only ones relevant to running POV-Ray, and the memory is pretty meaningless anyway, since it only becomes an issue with scenes that are ***way*** larger than those in the contest.