Mapping Troubles

Infinity (And beyond!) Join Date: 2002-01-25, Member: 50, Members
im THIS close to dropping my map...
what can i do if the makescales part of hlrad just keeps looping the same thing over and over again? it's not normal that it makes the swap file 300mb larger after 6 hours of choking on makescales alone, very weird

Comments

  • Ollj (our themepark-stalking nightmare, Fade) Join Date: 2002-12-12, Member: 10696, Members
    http://members.telocity.com/pdebaan/errors.htm
    [quote]
    HLRAD is SLOW/stuck on makescales
    HLRAD requires large amounts of memory to run efficiently for all but the most trivial of maps.

    The vismatrix hlrad needs to run takes exponentially more RAM as the vismatrix grows. The formula is 'number of patches' squared, then divided by 16. This number is how many bytes it will consume. The maximum is 65535 patches, so the maximum vismatrix is 256Mb of RAM.

    Furthermore, the amount of memory the vismatrix uses is not all the memory hlrad needs to run. Depending on the visibility of the map, the 'scales' cache consumes large amounts of memory at once as well. For most maps, this amount of memory is close to 1/2 the size of the vismatrix. This generally equates to a maximum of 128Mb, or a system total of 384Mb to run the worst case (65535 patches) map.

    The makescales phase has a tendency to run fast right up until it runs out of physical memory and has to start relying on the swapfile. This is frequently noticeable as makescales running quickly (say 20 minutes) up until the 90% mark, then taking several hours to finish the last 10%. This is always caused by running out of physical memory, and the last 10% work requiring heavy use of the swapfile. If more architecture is added to the map, one can see that it will start taking exponentially longer to compile, until the RAM is upgraded.

    Besides simply adding large quantities of RAM to the computer, the fix for this problem is identical to fixing a MAX_PATCHES error. Applying those fixes will reduce the number of patches, and cause HLRAD to need less memory, often speeding up the compile dramatically. If all that has been done, there is also the option of using -bounce 0 with HLRAD, and using just direct lighting for test compiles of the map. Ideally one would then use a non-zero -bounce for the final compile.
    [/quote]
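
    To make the quoted arithmetic concrete, here is a small sketch (in C, not part of the ZHLT source; the 46000 figure is just an example close to the patch count mentioned later in this thread) that estimates vismatrix and scales-cache memory from a patch count:

        #include <stdio.h>

        /* Rough HLRAD memory estimate using only the formula quoted above:
           vismatrix bytes = patches * patches / 16, and the 'scales' cache is
           taken as roughly half of that. Hypothetical helper, not ZHLT code. */
        int main(void)
        {
            unsigned long patches = 46000UL;  /* example patch count */
            unsigned long long vismatrix = (unsigned long long)patches * patches / 16ULL;
            unsigned long long scales = vismatrix / 2;

            printf("vismatrix: %llu MB\n", vismatrix / (1024ULL * 1024ULL));
            printf("scales:    %llu MB\n", scales / (1024ULL * 1024ULL));
            printf("total:     %llu MB\n", (vismatrix + scales) / (1024ULL * 1024ULL));
            return 0;
        }

    For 46000 patches that works out to roughly 126MB of vismatrix plus ~63MB of scales cache, which is why a machine with little RAM ends up living in the swapfile.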

    [quote]
    RAD takes forever!
    Be patient- it does take a long time.
    »Get/free up more RAM (i.e. run in an MS-DOS window with WC closed)
    »Run VIS on fast (not for finished version or large maps!)
    »Run QRad with -chop options
    »GET A FASTER PC!
    QRad gets stuck on MakeScales
    »You need more RAM (preferably 1.5 times as much as the VIS data size)
    [/quote]

    Tell me how much RAM you have and how fast it is...
    If it's not enough, check this out: http://rcs.valve-erc.com/
    The routine then is simple:
    to improve your lighting just compile 4 * 1/4 of the whole map (or just the changed part);
    the rest of the map should be sealed inside big solid blocks.
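
    For such a quick test compile, a command sequence roughly like this (mapname is a placeholder; exact switches depend on your ZHLT version) keeps RAD's memory use and runtime down by skipping bounced light and using the sparse vismatrix:

        hlcsg.exe mapname
        hlbsp.exe mapname
        hlvis.exe -fast mapname
        hlrad.exe -bounce 0 -sparse mapname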

    for a final compile use http://rcs.valve-erc.com/ or anyone else's PC.
  • quazilin, Join Date: 2002-11-25, Member: 9880, Members, Contributor, NS2 Playtester, Squad Five Blue
    I have such a bad computer and it always gets really slow on the makescales part (it could take days to compile just a test map). I made a test compile for my map and was frustrated waiting for the makescales part...
    After some searching on the web and forums I decided to add these switches to my hlrad:

    hlrad.exe -sparse -low

    -sparse uses less memory for the visibility matrix part.
    -low I use because it works best for my comp; it keeps the compile from hogging all the cpu.
    Try these, they helped the makescales part, at least on my comp.

    And I got a 150mb error log file that I could not open, and my map did not run.... I spent like 10 hours trying to solve everything! That was like grr..r... ****....

    Well, I know what causes the errors now, and I compiled my map and it works fine. But I still need to fix the error parts. Thehee
  • Jedediah, Join Date: 2003-01-27, Member: 12847, Members
    I posted about my similar problem a while ago. MakeScales was gobbling up 2GB of RAM.. way more than the supposed 384MB maximum. I think it's a problem with memory allocation efficiency in the tools. I have no idea how to track it down though.
  • Infinity (And beyond!) Join Date: 2002-01-25, Member: 50, Members
    it's very recent, I just added a new section to the map and boom! makescales, which usually takes a minute or two, now seems to take forever...

    i only have 64mb ram, pc66 :/ very old pc, no money for any upgrades

    im at 46000~ patches
  • Cagey (Ex-Unknown Worlds Programmer) Join Date: 2002-11-15, Member: 8829, Members, Retired Developer, NS1 Playtester, Constellation
    [quote=Jedediah @ Feb 17 2003, 06:36 AM]I posted about my similar problem a while ago. MakeScales was gobbling up 2GB of RAM.. way more than the supposed 384MB maximum. I think it's a problem with memory allocation efficiency in the tools. I have no idea how to track it down though.[/quote]
    There are some known memory leaks spread in HLCSG (commented as //TODO: this leaks), and it's possible that there are equivalent uncommented problems in HLRAD. Looking at memory allocations that aren't disposed of immediately would be a good place to start.

    My current focus is the clip hull generation of HLCSG, but I'd like to take a good look at HLRAD next, since it's eating up 90% of most people's compile time and is also the most resource greedy process. I was able to find some simple optimizations in HLCSG that really speed things up, and there may be similar gains possible in the lighting code.
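
    For anyone who wants to help hunt those down, the pattern to look for is roughly the one below. This is a made-up illustration of an allocation that never gets freed, not actual ZHLT code:

        #include <stdlib.h>

        /* Made-up example of a per-face allocation that leaks because the
           matching free() is missing; nothing like this exists verbatim in ZHLT. */
        static void ProcessFace(int numpoints)
        {
            float* points = (float*)malloc(numpoints * 3 * sizeof(float));
            if (!points)
                return;
            /* ... do work with points ... */
            /* TODO: this leaks -- free(points) is never called */
        }

        int main(void)
        {
            for (int i = 0; i < 10000; i++)   /* called once per face, so the leak adds up */
                ProcessFace(32);
            return 0;
        }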
  • Droggog (Random Pubber) Join Date: 2002-11-01, Member: 3293, Members, Constellation
    I have the exact same problem. Just added a (small) room to my map and boom, makescales time went from 2-3 mins to... more than 10 hours, and it wasn't finished, i stopped the process. I'll need to buy some more ram as i only have 128mb and i'm running under XP (on an amd 1800, 0.13 micron). Oh and QuArK is using 21,150K of ram too, oww eheh.

    In the meantime i'll try -bounce 0 or -sparse or whatever could speed up the process, like running the compile tools in a command prompt. 39,460 patches here.
  • Infinity (And beyond!) Join Date: 2002-01-25, Member: 50, Members
    magically enough, qrad seems to do it just fine... but qrad is not the accepted solution, not for me that is
  • Ollj (our themepark-stalking nightmare, Fade) Join Date: 2002-12-12, Member: 10696, Members
    [quote=Droggog @ Feb 18 2003, 04:04 AM]...Oh and QuArK is using 21,150K of ram too, oww eheh.

    In the meantime i'll try -bounce 0 or -sparse or whatever could speed up the process, like running the compile tools in a command prompt. 39,460 patches here.[/quote]
    use a batch compiler.
    - needs a little less ram
    - runs faster
    - more comfort
  • tommy14, Join Date: 2002-11-15, Member: 8839, Members
    Infinity, 64MB of RAM is low for HL mapping of any kind, and way, WAY low for NS. 128 is usually considered the min, and even that can give problems. as has been mentioned, the worst case for a large map with many light variations is 384Mb..... and NS tends to "peg the limits" often. i would consider 256 the min for NS map compiling myself, and 512 better yet.... otherwise RAD will go to the swapfile on the HD and that really slows things down to a crawl.

    general thoughts on why light map work goes up (if i am in error anywhere, i am sure cagey will clarify... grin):

    1. basically it is how much light & how many KINDS of light hit how many patches. fewer lights and fewer patches = smaller light map = less RAD work and a faster compile. this is what zoner's help file said:
    [quote]The vismatrix hlrad needs to run takes exponentially more RAM as the vismatrix grows. The formula is 'number of patches' squared, then divided by 16. This number is how many bytes it will consume. The maximum is 65535 patches, so the maximum vismatrix is 256Mb of RAM.[/quote]
    so according to the formula, if you double the patch count, expect 4x the work. 4x the patches = 16x the work. and so on. the work grows with the SQUARE of the patch count.

    2. another factor is VIS. yep, doing VIS -full helps RAD work better because the vis data is more precise, so the semi-raytracing part of the light calc is easier to do. but YOUR visblocking in the level is also key for RAD, not just for r_speeds, because YOUR visblocking chops the lighting work into smaller, easier-to-calc sections.

    3. how many lights is also a factor. you get up to 4 STYLES shining on a patch: one is locked in as the steady always-on lighting, plus up to 3 flickering/switchable/changing lights. with the always-on lights only = 1x, +1 switchable = 2x the light map for that patch, +2 = 4x, and +3 switchable (max) = 8x the lightmap.
    so you do not just multiply the work by the SQUARE as in #1, but also by a factor for the number of light styles.
    example:
    -one largish room of 100 patches and 1 texture light = (100x100/16)x1 = 625 operations
    -add 2 switchable lights = (100x100/16)x4 = 2,500 operations
    -add a 100 patch corridor the lights also shine into = (200x200/16)x4 = 10,000 operations. (note: that is NOT a 10,000 entry vis matrix, the matrix itself is only about 2,500 bytes at this point....)
    (<i>but i just added 2 switchable lights and a corridor! 625 to 10,000!!! </i>)
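
    A tiny sketch of that back-of-envelope model (tommy14's numbers above, not an exact model of what HLRAD actually does):

        #include <stdio.h>

        /* tommy14's rough estimate: (patches^2 / 16) scaled by a light-style factor. */
        static unsigned long rad_work(unsigned long patches, unsigned long style_factor)
        {
            return patches * patches / 16UL * style_factor;
        }

        int main(void)
        {
            printf("%lu\n", rad_work(100, 1));  /* one room, 1 texture light ->   625 */
            printf("%lu\n", rad_work(100, 4));  /* + 2 switchable lights     ->  2500 */
            printf("%lu\n", rad_work(200, 4));  /* + a 100-patch corridor    -> 10000 */
            return 0;
        }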

    <b>fixes</b>
    - fewer patches. just like scaling up a texture makes wpolys bigger and fewer, scaling up a texture also makes the patches bigger and fewer. bigger patches tend to make the lighting look worse though.
    - fewer switchable/variable lights.
    - do not "skybox in" a level to stop leaks, it adds 1000s of extra patches you do not need. fight the leak war!
    - <b>If you have not boxed in your level, then the #1 fix is running HLRAD with the -sparse flag - but compile will be slower.</b>
    - Using -chop values larger than the default 64 for hlrad will make the light patches larger. However, for values much above 96 the map's lighting starts to look bad, and shadows show a more prominent 'staircase' effect. (see the example command line after this list)
    - buy more RAM if you are under 256.
    - visblock your level better, and compile with VIS -full. vis blocking is not just for r_speeds!
    - like you do the wpoly fight, texturize details instead of making small brushes. each small brush is another patch series, as well as extra wpolys.
    - use the NULL texture on unseen faces. it not only reduces wpolys, it reduces patches....
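
    putting the -sparse and -chop points together, a test invocation along these lines is one way to try it (the map name and the chop value of 96 are just examples):

        hlrad.exe -sparse -chop 96 mapname

    then go back to the default -chop 64 for the final compile once the lighting looks right.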

    i hope this all is of some use.....
  • ChromeAngel, Join Date: 2002-01-24, Member: 14, Members, NS1 Playtester, Contributor
    As a workaround, I believe DrunkenBozo was offering an overnight compiling service on some extremely powerful hardware. If you can bring yourself to hand your .MAP over to someone else, this might solve your problem.
  • BlackPanther, Join Date: 2002-02-11, Member: 197, Members
    hehe when i compile my big maps, i boot my computer into dos to make sure i use all of my RAM :D
  • Lord_Requiem, Join Date: 2002-11-20, Member: 9481, Members
    Except there's a problem: the ZHLT package requires Windows to be running. It may run in a DOS box, but it's still a Windows program. You can't run it in plain DOS.

    RTFM.

    Req