Zhlt 3.0 Beta


Comments

  • Soylent_green Join Date: 2002-12-20 Member: 11220 Members, Reinforced - Shadow Posts: 2,867 Advanced user
    edited October 2004
    QUOTE
    It can be demonstrated that the number of multiplications is exponentially less, describing the number of calculations fewer as approximately .3181n^3.1294.

    What did I miss?


    That's a power function, not an exponential function.

    Power function: a variable raised to a constant power, e.g. n^3.

    Exponential function: the base is a constant and the exponent is a variable, e.g. 3^n.

    All methods O(n^m), where m is a constant and n a variable, are said to be solvable in polynomial time, as the number of operations is bounded by a polynomial function (given either as a power series or as a single power function).

    QUOTE
    Not all examples of exponents are the same values. It is exponential... see, n to the 3rd versus n to the 2.807. Just because it is solved in exponential time doesn't mean it has to have a specific ratio of your choosing.


    No, I don't see. Power functions aren't exponential functions; you can choose any other base if you like, but that does not change anything: n^3, n^2.807 and 0.3181n^3.1294 are all power functions, not exponential functions.

    All exponential functions multiplied by positive constants and with positive exponents grow faster than all power functions and polynomials in the limit to infinity (e.g. 1.0001^x will grow faster than x^10000 in the limit), which is why they are said not to be solvable in polynomial time; again, just look at the power series expansion of e^n. In case you argue that you can choose another base and somehow get out of this dilemma (it looks like such a mess in ASCII): m^n = e^(ln(m^n)) = e^(log_m(m^n)/log_m(e)) = e^(n/log_m(e)).

    (In the first step I used that f(x) = e^ln(f(x)), then that ln(f(x)) = log_m(f(x)) / log_m(e), and then the definition of log_m.)

    Since 1/log_m(e) is just a constant, the power series expansion of this will have a corresponding power of that constant in each term, which isn't of any concern as it will still be an infinite series of power functions.
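    A quick numerical sanity check of that growth claim (a standalone sketch, not part of the tools): comparing logarithms, since the raw values overflow a double, shows where 1.0001^x finally overtakes x^10000.

```cpp
#include <cmath>

// Compare growth by comparing logarithms:
//   log(1.0001^x) = x * log(1.0001)   vs   log(x^10000) = 10000 * log(x).
// The exponential trails for a very long time, but wins in the limit.
bool exponential_leads(double x) {
    return x * std::log(1.0001) > 10000.0 * std::log(x);
}
```

    At x = 10^9 the power function is still far ahead; by x = 10^10 the exponential has passed it for good.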

    Not that ~O(n^2.807) isn't a big improvement over O(n^3), but neither of them is anywhere near as bad as non-polynomial time solutions.
  • Anpheus Join Date: 2004-09-30 Member: 32013 Members Posts: 63
    edited October 2004
    Excuse my grievous error on the interpretation of polynomial time versus exponential time.


    Regardless, while Strassen's algorithm may seem only a minor improvement, you must consider how many multiplications are done in a map, and how many patches there are. The number of patches in a map can grow huge, and this results in a very large multiplication problem. A square matrix times a column matrix normally has O(n^2) multiplication complexity, so for every patch that has 512 patches visible, it would require 262,144 multiplications to return the result matrix. This is a huge number.

    Unfortunately I know of no known C, C++, or other implementation of either the Strassen or the Coppersmith-Winograd algorithms.

    It also should be noted that the Coppersmith-Winograd algorithm is even more efficient: even with a small dimension of 2^8 = 256, the normal method requires 16,777,216 multiplications, Strassen's algorithm only 5,764,801, and the figure given for Coppersmith-Winograd only 390,625.

    That is a reduction of 65% from 'Normal' to Strassen's, a reduction of 97% from 'Normal' to the Coppersmith-Winograd figure, and a reduction of 93% from Strassen's algorithm to the Coppersmith-Winograd algorithm.
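    The figures above can be reproduced with a small helper (a sketch: it assumes Strassen recursing all the way down, i.e. 7^log2(n) multiplications, and treats the Coppersmith-Winograd figure as 5^log2(n), which matches the numbers quoted but is only my estimate, not the real CW algorithm, whose exponent is ~2.376):

```cpp
#include <cstdint>

// Multiplication counts for multiplying two n x n matrices, n a power of 2.
// Classical: n^3 scalar multiplications.
// Strassen:  7 multiplications per halving step -> 7^log2(n).
// "CW figure": treated here as 5^log2(n) to match the post's numbers.
static std::uint64_t ipow(std::uint64_t base, unsigned exp) {
    std::uint64_t r = 1;
    while (exp--) r *= base;
    return r;
}

static unsigned log2_exact(unsigned n) {   // n must be a power of two
    unsigned k = 0;
    while (n > 1) { n >>= 1; ++k; }
    return k;
}

std::uint64_t classical_mults(unsigned n)   { return (std::uint64_t)n * n * n; }
std::uint64_t strassen_mults(unsigned n)    { return ipow(7, log2_exact(n)); }
std::uint64_t cw_estimate_mults(unsigned n) { return ipow(5, log2_exact(n)); }
```

    For n = 256 this gives exactly the three counts quoted above.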


    Now, I shall look for a way to incorporate Strassen's algorithm... and hopefully locate the Coppersmith-Winograd algorithm.

    Then I will be able to present figures in the manner that they would be utilized in the program.


    Edit: The numbers for Coppersmith-Winograd are an estimate; I do not know the 'baseline' they used, nor the requisite dimension size. I simply used 2x2, which resulted in smaller multiplication counts.
  • amckern Join Date: 2003-03-03 Member: 14249 Members, Constellation Posts: 970 Fully active user
    thanks for writing functions, guys

    i don't understand the theory behind the code very well

    amckern

    No Frils!
  • Reve Join Date: 2003-09-23 Member: 21142 Members Posts: 64
    This is bringing back memories of my "Foundations of Computer Science" classes from last year :o

    Reve
  • Soylent_green Join Date: 2002-12-20 Member: 11220 Members, Reinforced - Shadow Posts: 2,867 Advanced user
    QUOTE
    Excuse my grievous error on the interpretation of polynomial time versus exponential time.


    Of course. Whenever issues like this arise, it's best to talk it out until one of you understands that he is wrong and why.

    Multiplying isn't the only performance issue here; floating point addition isn't much faster. Not having things in cache is an important issue if you really want to get up to speed. I'm not really sure how you would ensure that things stay in cache as well as possible, but doing things in an order where the same terms are used so often that they rarely need to be read from system memory seems useful.

    If you want a low level timer for comparing performance of different methods or doing performance profiling you can use the QueryPerformanceCounter function defined in windows.h.

    e.g.

    LARGE_INTEGER OldPerformanceCount;
    LARGE_INTEGER NewPerformanceCount;
    LARGE_INTEGER PerfFrequency;

    //get frequency of the hardware timer
    QueryPerformanceFrequency(&PerfFrequency);

    QueryPerformanceCounter( &OldPerformanceCount );

    //do stuff, call some functions whatever.

    QueryPerformanceCounter( &NewPerformanceCount );

    double DeltaT = (double)(NewPerformanceCount.QuadPart - OldPerformanceCount.QuadPart) / ((double)PerfFrequency.QuadPart);
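    On compilers that have it, the same pattern can be written portably with std::chrono instead of windows.h (a sketch; Stopwatch is a made-up helper name, not anything from the tools):

```cpp
#include <chrono>

// Portable equivalent of the QueryPerformanceCounter pattern above:
// record a start time, then ask for elapsed seconds as a double.
class Stopwatch {
    std::chrono::steady_clock::time_point start_;
public:
    Stopwatch() : start_(std::chrono::steady_clock::now()) {}
    double Seconds() const {
        std::chrono::duration<double> d =
            std::chrono::steady_clock::now() - start_;
        return d.count();
    }
};
```

    Usage is the same idea: construct a Stopwatch, do stuff, then read Seconds().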

    I wouldn't bother trying to optimize matrix multiplication just yet. If there hasn't been an accurate performance profile done for HLRAD (and you don't seem to trust prsearle on this matter), you don't know for a fact whether it will help or not; sure, you can still do it as a fun/educational project. Finding out whether it is a significant bottleneck in HLRAD would be the first task if education isn't your motivation. Bounce steps are certainly very quick once the matrix is built; they take a mere percent or so of the total time.
  • Anpheus Join Date: 2004-09-30 Member: 32013 Members Posts: 63
    Hm.

    Why does the vismatrix take so long? What steps are used in its creation?
  • Soylent_green Join Date: 2002-12-20 Member: 11220 Members, Reinforced - Shadow Posts: 2,867 Advanced user
    As I understand it, finding how well patches see each other. I.e. if patch i has a certain radiosity, how much does this light up patch j?

    I think it involves first checking to see if there could be any possible line of sight. The PVS (Potentially Visible Set), which is constructed when running HLVIS, could be used to quickly rule out whether two patches can see each other at all.

    If the patches can potentially see each other, some trickier method has to be used to determine if they do; a wild guess is that they use the same method used for bullets in Half-Life, finding if any surface in the clip hull reserved for points intersects a 'vector' from one patch to another. (Players and bullets use clip hulls; this allows you to reduce collisions of players down to point collisions with 'walls' in the precomputed clip hull, instead of checking collisions against a bounding box (or cylinder/ellipsoid or an even more accurate shape).) Assume patches are point lights, power density decreasing as 1/distance^2; use this and Lambert's law together with some properties of the patches (inherited from the surface they are on, as it should have a higher albedo for a 'whiter' texture) to determine the components of the matrix.

    This is an O(n^2) algorithm, but for sane n I think the linear component is more important. If you have a map twice as large as another one, this typically does not mean that you have twice as many patches and that the average number of visible patches from a patch is twice as large! If you have twice the number of patches but the same average number of visible patches per patch, then you would only have twice as many non-zero elements in the vis-matrix, and those would be the ones that take the most time to compute (I think); using the PVS should be reasonably fast. (For large n the n^2 term will outgrow the n term.)
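    The distance/angle part of that guess could look something like this (a sketch of the math described, not hlrad's actual code; TransferFactor is a made-up name, and real form factors also carry patch area and a normalization term):

```cpp
struct Vec3 { double x, y, z; };

static Vec3   sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Point-to-point transfer between two patches: inverse-square falloff
// times the Lambert cosine terms at both patches (unit normals assumed).
double TransferFactor(Vec3 pos_i, Vec3 normal_i, Vec3 pos_j, Vec3 normal_j) {
    Vec3 delta = sub(pos_j, pos_i);
    double dist2 = dot(delta, delta);
    double ci =  dot(normal_i, delta);   // un-normalized: carries a |delta|
    double cj = -dot(normal_j, delta);   // likewise
    if (ci <= 0.0 || cj <= 0.0) return 0.0;   // patches face away: no transfer
    // Dividing out the two hidden |delta| factors plus the 1/dist^2 term
    // gives |delta|^4 in the denominator, i.e. dist2 * dist2.
    return ci * cj / (dist2 * dist2);
}
```

    Two unit patches facing each other at distance 1 give a factor of 1; a patch facing away contributes nothing.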
  • Soylent_green Join Date: 2002-12-20 Member: 11220 Members, Reinforced - Shadow Posts: 2,867 Advanced user
    edited October 2004
    Clip hull generation is so quick relative to radiosity that I would love to see cylindrical player hulls used to generate them. Since rectangular axis-parallel blocks are used in their generation, the player has a sqrt(2) times larger collision cross section in some directions.

    E.g. in NS, a square hole in the floor has to have sides longer than 36*sqrt(2) ≈ 51 units for the player to be able to fall through it, if it is rotated so that its diagonals are parallel with the x and y coordinate axes.

    Is there any reason this would not work? With square player hulls, non-axis-parallel geometry can be a real **** some times...

    edit: preserved for hilarity: "in NS a hole in the square hole in the floor".
  • prsearle Join Date: 2002-11-01 Member: 2365 Members, Constellation Posts: 338
    edited October 2004
    QUOTE (Soylent green @ Oct 2 2004, 11:29 PM)
    As I understand it, finding how well patches see each other. I.e. if patch i has a certain radiosity, how much does this light up patch j?

    I think it involves first checking to see if there could be any possible line of sight. The PVS (Potentially Visible Set), which is constructed when running HLVIS, could be used to quickly rule out whether two patches can see each other at all.

    If the patches can potentially see each other, some trickier method has to be used to determine if they do; a wild guess is that they use the same method used for bullets in Half-Life, finding if any surface in the clip hull reserved for points intersects a 'vector' from one patch to another. Assume patches are point lights, power density decreasing as 1/distance^2; use this and Lambert's law together with some properties of the patches to determine the components of the matrix.

    This is an O(n^2) algorithm, but for sane n I think the linear component is more important.

    That's almost how rad does it. The vismatrix is used to speed up visibility testing, so patches that can't potentially see each other don't need to be considered when calculating how much light is transferred. Vismatrix creation is reasonably efficient (the best case for deciding whether a vismatrix bit should be set is one comparison against the hlvis pvs; the worst case is the pvs test, a dot product and a raytrace, plus raytraces against all opaque entity faces). The need to trace against all opaque entity faces is the reason rad runs so slowly when the vismatrix is disabled: the vismatrix isn't used, so all these visibility tests have to be performed every time a patch's visibility is queried. The actual light transfers (represented as a matrix in the valve-erc article linked above) are actually stored in compressed form, and their calculation takes the bulk of the time spent in rad. Each transfer requires that two patches' visibility be determined and, if they are potentially visible, their distance and relative angles are used to calculate the transfer value. Once all this is done, the energy can be bounced from the emitting faces; this is reasonably quick (8 bounces take only a little longer than 2).
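    Since visibility is symmetric (i sees j iff j sees i), the vismatrix bits only need the lower triangle of the matrix, one bit per patch pair; a sketch of that layout (not the actual hlrad source):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Triangular bit matrix: one visibility bit per unordered patch pair.
class VisMatrix {
    std::vector<unsigned char> bits_;
    static std::size_t Index(std::size_t i, std::size_t j) {
        if (i < j) std::swap(i, j);          // fold into the lower triangle
        return i * (i + 1) / 2 + j;
    }
public:
    explicit VisMatrix(std::size_t patches)
        : bits_((patches * (patches + 1) / 2 + 7) / 8, 0) {}
    void Set(std::size_t i, std::size_t j) {
        std::size_t k = Index(i, j);
        bits_[k >> 3] |= (unsigned char)(1u << (k & 7));
    }
    bool Test(std::size_t i, std::size_t j) const {
        std::size_t k = Index(i, j);
        return (bits_[k >> 3] >> (k & 7)) & 1u;
    }
};
```

    Setting bit (3,7) automatically makes (7,3) visible too, and the whole matrix for n patches costs only n(n+1)/16 bytes.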
  • Anpheus Join Date: 2004-09-30 Member: 32013 Members Posts: 63
    So really, the most efficient way to speed up RAD would be to use a better algorithm for finding % visibility...


    It would be difficult to find anything simpler and more efficient than raytracing, though. I'm sure there's been a study on it somewhere.
  • Reve Join Date: 2003-09-23 Member: 21142 Members Posts: 64
    ZHLT 3 documentation online

    I'd like to announce the launch of the ZHLT documentation online, www.zhlt.info

    At the moment it consists of just the documentation I did for XP-Cagey's p14 build, updated for p15, p15.5, and ZHLT 3 betas 1 - 3. I will be keeping it updated with changes to ZHLT 3, and plan to add other features (such as links to the different versions of the tools, and links to particularly helpful tutorials). I also welcome any suggestions from the community with regards to the site.
  • Mendasp I touch maps in inappropriate places Valencia, Spain Join Date: 2002-07-05 Member: 884 Members, NS1 Playtester, Contributor, Constellation, NS2 Playtester, Squad Five Gold, NS2 Map Tester, Reinforced - Shadow, WC 2013 - Shadow, Community Dev Team Posts: 4,175 Fully active user
    Any update on the tools? I mean, a working RAD would be nice ;)
  • Tequila Join Date: 2003-08-13 Member: 19660 Members Posts: 3,022
    QUOTE (Mendasp @ Oct 5 2004, 11:51 PM)
    Any update on the tools? I mean, a working RAD would be nice ;)

    I'll drink to that.
  • amckern Join Date: 2003-03-03 Member: 14249 Members, Constellation Posts: 970 Fully active user
    yeah yeah, hold on

    I don't like bugs in compiles, and there is a whole list of them

    I also have to work out why csg tells me off when i use complex brush work




    Please use the ZHLT p15 rad at the moment

    Though i am going to use Merl's code, and re-add the extra light data, and see if that fixes the bugs




    I am busy with 4 projects at the moment

    ZHLT 3.0
    SapphireScar
    Real Life
    TAFE Exams and final term projects (Part of real life)

  • FragBait0 Join Date: 2003-12-16 Member: 24431 Members Posts: 58
    Actually, p13 RAD is the one to use really. It runs at full speed and doesn't wreck entity shadows. :)
    I'm using p13 RAD alongside the beta 2 tools and all is fine.
    I suppose p13 does do some weird self-shadowing on opaque entities, but it's better than abstract shapes on the roof :P

    The thing about using round player clipping hulls is that, first, you are probably going to need a lot more clipnodes, and second, the hulls could end up with a lot more bugs. Some would say they are already buggy as hell, but I'm yet to find a case that isn't fixed with some nicer brushwork (and/or smart use of the BEVEL texture :)). *shrug*
  • Soylent_green Join Date: 2002-12-20 Member: 11220 Members, Reinforced - Shadow Posts: 2,867 Advanced user
    edited October 2004
    (Excuse my poor terminology.)

    You wouldn't need to end up with many more clip nodes using cylinder hulls.


    edit: Oh no it borked my quotes

    fix : Dots are solid space
    e is empty space
    QUOTE

    top down:

          ^
          |


    ...........|
    ...........| --->
    ...........|


    'Outer corner' with normals shown. You would get a cylindrical region around the corner where the player cannot move if we did an exact solution, but we can just extend the planes parallel to the wall to the point where they intersect and forget all about the cylindrical region, or we could replace the cylindrical region with a bevel.

    QUOTE

    top down:

    eeeeeeeee|.....
    eee^e<---|.....
    eee|eeeee|.....

    .....
    .....................


    'Inner corner' with normals. This would not be a problematic case.
  • amckern Join Date: 2003-03-03 Member: 14249 Members, Constellation Posts: 970 Fully active user
    next time, use [code]

    it formats it better than quote
  • FragBait0 Join Date: 2003-12-16 Member: 24431 Members Posts: 58
    QUOTE (Soylent green @ Oct 6 2004, 02:15 PM)
    You wouldn't need to end up with many more clip nodes using cylinder hulls.

    Well....you code it then.
  • amckern Join Date: 2003-03-03 Member: 14249 Members, Constellation Posts: 970 Fully active user
    I have just updated the 1st post, to reflect this thread being more than a temp thread for people to download the tools

    Also, i know ASCII is hard to draw; i was just saying that drawing with the code tag rather than the quote tag formats the ASCII drawing better
  • Reve Join Date: 2003-09-23 Member: 21142 Members Posts: 64
    amckern, could you email me (usual address) each time you make a release? Updating the site for each release shouldn't take more than a minute or two, so I'll get it done rather quickly.

    Reading the changelog for the unreleased beta 4 on your site, it mentions a fix for entity-based shadows - is that the second 'critical problem' that I listed on the front page of the docs site? Is it completely fixed? Sounds like ZHLT 3 is nearer than we thought :)

    Finally, a few questions and thoughts:

    What exactly is that code from nvidia? Will it cause copyright/distribution problems? Can an alternative be written?

    Also, are you trying to convert the entire tool set to assembler, or just the key functions that need speeding up (for example, that function in hlvis that was slightly sped up by egir in beta 1)? Converting the whole lot to assembler, from experience in open source projects involving assembler (v2os, unununium), is likely to heavily cut down the number of people contributing to the tools, and the ease of maintaining them. Individual functions may, however, give good results.

    Finally, I've just noticed I've left you off the list of maintainers at the top of the ZHLT credits page (though you are in the main list). I'll add you in tonight :)

    Reve
  • amckern Join Date: 2003-03-03 Member: 14249 Members, Constellation Posts: 970 Fully active user
    edited October 2004
    yeah, Beta 4 is ALMOST done

    CSG was working well on all the test maps i was sent, and then one of the guys that downloaded it told me csg was still broken

    AJenbo, m8, can you please look at your code, and see if the -16 change broke the tools?

    also aj, for the latest source add a 4, instead of 2, to the link i sent ya

    I'll also let you know m8; boy, u keep an eye on my site like a chicken hawk on a pen

    In relation to the NVidia code, it's basically a free piece of source code in assembly that they had on their site

    Part of the rad opts uses part of that code - look at the dev tools page to find the code if you want to check it out
  • Soylent_green Join Date: 2002-12-20 Member: 11220 Members, Reinforced - Shadow Posts: 2,867 Advanced user
    edited October 2004
    QUOTE
    Well....you code it then.


    Looking at qcsg in the SDK now; ZHLT will probably be the same basic code extended with bug fixes and such, so it should be somewhat representative. It may turn out to be as easy as a quick modification to the code that generates the planes to clip against, if I understand roughly how it works.

    edit: qcsg doesn't allow planes with arbitrary normals that aren't parallel to any of the x-y, y-z or z-x planes. *sigh* I'm gonna have to try and change that...
  • FragBait0 Join Date: 2003-12-16 Member: 24431 Members Posts: 58
    I shoulda just kept my big mouth shut and fingers still...
  • Anpheus Join Date: 2004-09-30 Member: 32013 Members Posts: 63
    The unfortunate truth of it is that the engine has far too many constraints on it, to the point where things that have efficient algorithms today cannot be used, because the algorithms in the game are so different and the editor does not properly handle objects such as concave brushes. Yes, it is possible; it isn't that difficult to envision an engine that simply renders every object as a plane with a series of bounds. But this engine decides to chug away the old-fashioned way: instead of checking individual face visibility it checks the visibility of an entire brush, among numerous other things. Obviously, if you have a complex map there are faces that will never be seen, and never need to be rendered; thankfully the Null texture resolves most of this conflict, but not all. The result is that the engine and compile tools are delightfully inept at doing complicated procedures.


    Oftentimes I wish I were able to create a single face in the world editor instead of requiring a convex brush; I could make my entire map out of faces that had a texture on one side and null on the other. It would be amazingly efficient because there would be no need for extra calculations: for a 4-vertex rectangular wall, the requisite pyramid has only 5 vertices - one for each of the rectangular wall's points, and a 5th to make the entire structure into a pyramid. Only one of its faces need ever be rendered or have light calculations performed on it.


    Unfortunately, as is often the case nowadays, business concerns override the urge for developers to improve a game. Thus, instead of updating this antiquated software, they have deliberately and consistently ignored the compile tools and core engine code, and failed to modify even the simplest pieces of code to increase efficiency; unless something causes a fatal error or disrupts Counter-Strike (or Condition Zero, more aptly) gameplay, it is ignored and the developers are told to move on to the next engine, Source. Even now mods are constantly adding new features, such as the Spirit of Half-Life modification: simple features that were never added to Half-Life, not out of lack of skill or effort within Valve, but because as a business move it makes little sense to update a game that has already been sold to almost all of the people who will ever own it.


    Ok, I'm done ranting! I feel better. Now I get to go read some post on a political discussion forum and I'll be angry all over again.
  • AJenbo Join Date: 2004-06-14 Member: 29298 Members Posts: 30
    edited October 2004
    yep, i will be looking into that and at your site, but probably nothing will happen until next week, as i have an exam and some animations i need to have finished for this week.

    edit:
    just got all tools except netvis to compile on msvc++ 2005 beta. could some one pack the needed header files up for me, or post a more direct link? i can't seem to navigate around on http://sourceforge.net/projects/cplusplus/

    hmm, would it be an idea to add a new null that didn't cast shadows? because when you make a single face with null, the nulled face will still cast a shadow
  • Anpheus Join Date: 2004-09-30 Member: 32013 Members Posts: 63
    edited October 2004
    I think right now a very cool idea would be to add support for face-based architecture.

    The algorithm would be simple... sort of like with Hint brushes, but instead they would be solid textures. It would work like this:

    You generate a brush with a texture on one face and a null texture on the other. The entire brush consists of two faces, and the two faces must have a normal. When the compile tools find a brush with two faces, they automatically generate a new vertex on the side opposite the texture, that is, the null side. The new vertex is the connection between all the others: it is placed at the center of the face, but one unit outward from the object. So now, instead of being a face, it's a legitimate pyramid. This pyramid should be used for all the calculations, and the new faces should all have the Bevel texture (no clipping) or a new texture that is utterly and completely voided in all future calculations, including vismatrix generation!

    This would make it so that I can simplify my RMF file by making complex structures out of faces, and then the compile tools can change those faces into simple pyramidal brushes.
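    That apex-generation step could be sketched like this (a hypothetical helper, not part of the tools; it assumes the face's vertices and its textured-side unit normal are already known):

```cpp
#include <vector>

struct Vec3 { double x, y, z; };

// Place a new vertex at the face's centroid, pushed one unit behind it
// (opposite the textured side's normal, toward the null side), turning
// the two-faced brush into a closed pyramid.
Vec3 PyramidApex(const std::vector<Vec3>& face, Vec3 normal) {
    Vec3 c{0.0, 0.0, 0.0};
    for (const Vec3& v : face) { c.x += v.x; c.y += v.y; c.z += v.z; }
    double n = (double)face.size();
    c.x /= n; c.y /= n; c.z /= n;
    // one unit away from the textured face, on the null side
    return {c.x - normal.x, c.y - normal.y, c.z - normal.z};
}
```

    For a 2x2 square in the z = 0 plane with normal (0,0,1), the apex lands at (1,1,-1).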

    Here, let me make an example to show you how it would be interpreted by the world editor and then by your compile tools.

    Brush as represented in hammer:
    [image not preserved]

    Brush as the compile tools would read it (the 'tip' of the pyramid is exaggerated; in reality it would be located optimally one unit away from the center of the face, towards the Null/Bevel side):
    [image not preserved]



    Also, on to Soylent Green's post. It wouldn't be necessary to change the way the engine defines clipping to handle a cylindrical object. It would only be necessary to change the compile tools so that there would be an option indicating whether or not a particular brush has clipping. Then, using Null brushes, it would be possible to add clipping in any manner the map creator sees fit. Simpler than either of you foresaw, although it is not automated.
  • Soylent_green Join Date: 2002-12-20 Member: 11220 Members, Reinforced - Shadow Posts: 2,867 Advanced user
    edited October 2004
    QUOTE
    But this engine decides to chug away the old-fashioned way: instead of checking individual face visibility it checks the visibility of an entire brush, among numerous other things.


    This caught my eye. Modern engines try to quickly exclude large bunches of polygons all at once, and try not to touch individual polys, much less individual vertices, whenever it is possible to avoid it. Today CPUs are so much slower relative to graphics cards than they used to be that doing things the way HL1 does is a huge no-no in modern games. When Half-Life was made it made sense to do things the way they were done; now it is a HORRIBLE idea.

    Half-life uses immediate mode for models and wpolys.

    Here's what Half-Life does to render w-polys (obtained using the gl_log command and replacing some of the hexadecimal constants with their symbolic constants as found in gl.h):

    CODE
    glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE )
    glEnable( GL_TEXTURE_2D )
    glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE )
    glBegin( GL_POLYGON )
    glVertex3fv
    glVertex3fv
    glVertex3fv
    glVertex3fv
    glEnd
    glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE )
    glEnable( GL_TEXTURE_2D )
    glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE )
    glBegin( GL_POLYGON )
    glVertex3fv
    glVertex3fv
    glVertex3fv
    glVertex3fv
    glEnd
    glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE )
    glEnable( GL_TEXTURE_2D )
    glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE )
    glBegin( GL_POLYGON )
    glVertex3fv
    glVertex3fv
    glVertex3fv
    glVertex3fv
    glVertex3fv
    glEnd


    You'd be much better off with something like vertex arrays (useful even with dynamic geometry) or display lists (not useful with dynamic geometry). Here we see a single call specifying the location of each and every vertex in the polygon (function calls incur quite a bit of overhead, so it would be much better to assemble the geometry into a vertex buffer locally and send it all at once).

    GL_QUADS is not even used for drawing w-polys when applicable (many w-polys have 4 vertices); this little tid-bit of information could be precached, instead of openGL having to figure out at run-time how many vertices each and every call has.
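    The batching idea could look like this (a sketch of the vertex-array approach, not Half-Life's code; the GL calls are left as comments so the sketch stands alone):

```cpp
#include <cstddef>
#include <vector>

// Instead of one glVertex3fv call per vertex, append all polygons sharing
// a texture into one local buffer and submit it with a single draw call.
struct VertexBuffer {
    std::vector<float> xyz;                 // packed x,y,z triples
    void AddQuad(const float q[4][3]) {
        for (int v = 0; v < 4; ++v)
            for (int c = 0; c < 3; ++c)
                xyz.push_back(q[v][c]);
    }
    std::size_t VertexCount() const { return xyz.size() / 3; }
    // To draw the whole batch in one call:
    //   glVertexPointer(3, GL_FLOAT, 0, xyz.data());
    //   glDrawArrays(GL_QUADS, 0, (GLsizei)VertexCount());
};
```

    Two quads become one buffer of 8 vertices and a single draw call, rather than 2 glBegin/glEnd pairs and 8 glVertex3fv calls.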

    Model rendering looks like this:

    CODE
    glBegin( GL_TRIANGLE_FAN )
    glTexCoord2f
    glColor4f( 0.486901,0.515543,0.572825,1 )
    glVertex3f
    glTexCoord2f
    glColor4f( 0.486901,0.515543,0.572825,1 )
    glVertex3f
    glTexCoord2f
    glColor4f( 0.486901,0.515543,0.572825,1 )
    glVertex3f
    glTexCoord2f
    glColor4f( 0.486901,0.515543,0.572825,1 )
    glVertex3f
    glTexCoord2f
    glColor4f( 0.486901,0.515543,0.572825,1 )
    glVertex3f
    glTexCoord2f
    glColor4f( 0.486901,0.515543,0.572825,1 )
    glVertex3f
    glTexCoord2f
    glColor4f( 0.486901,0.515543,0.572825,1 )
    glVertex3f
    glTexCoord2f
    glColor4f( 0.486901,0.515543,0.572825,1 )
    glVertex3f
    glTexCoord2f
    glColor4f( 0.486901,0.515543,0.572825,1 )
    glVertex3f
    glEnd

    At least triangle fans and triangle strips are used, so as not to have to specify as many vertices. Still, we have 3 calls for every single vertex, and on average something close to 1 vertex specified per triangle could be expected (there's quite a lot of reuse of vertices here).

    On a Radeon 9800 Pro, HL1 is utterly, totally and completely CPU bound; resolution, AA, AF, ingame settings - nothing matters to performance unless it also lowers CPU usage (like turning off dynamic lights). The reason nvidia cards fare better in HL1 is that their openGL driver has significantly less overhead (in immediate mode).

    A lot of modern games aim to lower CPU usage by having low-poly enemies and characters. (Look at the movie of the Unreal 3 tech demo if you can find it: they start with source art that's millions and millions of polygons, create a mesh with a few thousand polygons, and generate normal maps (among other things) to 'fake' the detail of the higher-poly models. As long as you don't look at the edges, which can be quite sharp, it will look almost exactly like the high-poly version. Doom 3 and HL2 have also taken this path, but to a lesser extent.)

    Games like Stalker and Far Cry take another approach: they need high poly counts, and they cheat as much as possible with LOD; if something is a mile away it may as well be a sprite or something instead of 2000 polys of foliage. To get away with the high poly counts they have, they must store all the primitives in a practical way that the CPU needs to mess with as little as possible, preferably storing static geometry directly in graphics card memory and only doing a few calls from the CPU's side of things. It's often much faster to just draw things instead of worrying too much about occlusion. This might not be so if a lot of heavy shaders are used; there it's pretty important to reduce overdraw.

    In DirectX (WGF, according to the Inq) there are a lot of new features intended to reduce CPU load.

    QUOTE
    Also, on to Soylent Green's post. It wouldn't be necessary to change the way the engine defines clipping to handle a cylindrical object. It would only be necessary to change the compile tools so that there would be an option indicating whether or not a particular brush has clipping. Then, using Null brushes, it would be possible to add clipping in any manner the map creator sees fit. Simpler than either of you foresaw, although it is not automated.


    Who wants to toy with thousands of brushes manually? All I want with cylindrical player hulls is to change the way clip hulls are generated, not anything in the engine. There are a lot of situations (especially tight places) where you are being constrained by axis-parallel geometry, and it is damned annoying. Also, surfaces that are outside the world are cropped away, not lit or cared much about, during compile time.
  • Anpheus Join Date: 2004-09-30 Member: 32013 Members Posts: 63
    edited October 2004
    Please read the last paragraph of my large reply. It pertains to your problem with non-perpendicular planes impeding player movement even though, logically, the hole is large enough.


    I think that with that method, it does not require that Amckern completely redesign the way the clipping hulls are generated, instead, it allows mappers to completely personalize the way those holes would be used.

    Edit: Whoops, there I go again, not waiting until you edit your post. Heh.



    Well, the thing is, Amckern seems relatively new to the ZHLT thing. I don't think he's ready to code his own advanced algorithm to generate clipping hulls based on the idea that a person can be roughly approximated as a cylinder, or even an oval.

    Yes, it would be nice to have the compile tools automatically sort it out for you, but it seems to me the easier method would be to allow for brushes to have a special property that prevents them from having a clipping hull.

    Although unfortunately you wouldn't be able to create a world brush that way, because there are no flags or options that can be set in Hammer that would be read by the compile tools.

    Perhaps using a specific texture on one of the faces would be one way of allowing world brushes to have non-clipping geometry...


    Edit: I also forgot to mention that player model size can differ from game to game, so what may be appropriate for the compile tools in one game may differ greatly from what is used in another.
  • Soylent_green Join Date: 2002-12-20 Member: 11220 Members, Reinforced - Shadow Posts: 2,867 Advanced user
    qcsg looks really messy; it hasn't all sunk in yet, so this post is kind of speculative. I don't think you need to completely change how csg works or anything like that, or I wouldn't even give it a half-hearted attempt myself. It looks like you only need to make a few careful changes to how it determines how far the clipping planes should be from the surfaces. I'm just trying to identify and understand the relevant functions in brush.c before I do much at all. There's a lot of poorly commented vector math that takes a fair bit of time and paper to read and comprehend.
  • Anpheus Join Date: 2004-09-30 Member: 32013 Members Posts: 63
    The problem is, let's say you edit it for NS.

    Then anybody who wants to use the compile tools for Counter-Strike, The Specialists, DoD, and other major mods will have problems. The clipping hulls will be specialized for NS.

    It is best to let the map creator decide exactly what clips and what doesn't.


    There are two ways to go about it. First, you can create a map with the clipping hull you want, and save that file. Then you can compile it again with all the lighting, but load the clipping hulls from the first compile.

    Or you can code the tools to let you decide which brushes have clipping or not. This could be anything from a worldspawn key/value with a comma-delimited list of every brush that does not possess clipping, to making it load a text file, etc.
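    The worldspawn key/value variant could be parsed with something as simple as this (the key name and format here are made up for illustration, not anything the tools actually read):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Parse a hypothetical worldspawn value like "4,17,256" -- a
// comma-delimited list of brush numbers that should get no clip hull.
std::vector<int> ParseBrushList(const std::string& value) {
    std::vector<int> brushes;
    std::stringstream ss(value);
    std::string tok;
    while (std::getline(ss, tok, ','))
        if (!tok.empty()) brushes.push_back(std::stoi(tok));
    return brushes;
}
```

    The compiler would then skip clip-hull generation for any brush whose number appears in the list.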