4 Replies Latest reply on Oct 31, 2011 9:57 PM by Mido

    Where are the logs for scheduled tasks stored?


      Sometimes when you are viewing scheduled tasks in the console you have the option to right-click a device and choose to view the log. A log file then opens, but it has a "tmp"-style file name. If I choose File | Save As, I am prompted to save on my computer rather than in whatever location the log is actually stored. Does anyone know where these logs are stored? They are not in the Ldlog folder on the core, and they are not being pulled from the client on demand, because I can view logs for computers that are not currently reachable.

        • 1. Re: Where are the logs for scheduled tasks stored?
          Catalysttgj Expert

          Take a look at this table in your DB: DistributionTaskLog.
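
          If you just want to eyeball it, a quick query like this should do (column names vary by version, so SELECT * first and see what you get):

          ```sql
          -- Peek at the scheduled task log entries stored in the core DB.
          -- (TOP 50 just keeps the result small; adjust as needed.)
          SELECT TOP 50 *
          FROM DistributionTaskLog;
          ```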


          Hope that helps!


          • 2. Re: Where are the logs for scheduled tasks stored?

            Thanks! This is going to be fun: building queries that join this table with what looks like at least LD_TASK. I am just learning SQL, and I cannot seem to grasp the join concept (especially inner vs. outer) without looking it up every time I need it.
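
            A starting point might be a join along these lines; the key column names here (LD_TASK_IDN etc.) are guesses, so check the actual schema before trusting them:

            ```sql
            -- Hypothetical join of the task log to the task definition table.
            -- LD_TASK_IDN is an assumed key column, not confirmed against the schema.
            SELECT t.TASK_NAME, l.*
            FROM DistributionTaskLog l
            LEFT OUTER JOIN LD_TASK t
                ON t.LD_TASK_IDN = l.LD_TASK_IDN;
            ```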

            • 3. Re: Where are the logs for scheduled tasks stored?
              Catalysttgj Expert

              Haha.. yeah, I hear that. My rule of thumb is LEFT OUTER JOIN, almost always. INNER is lossy, if I'm not mistaken: it silently drops any row that has no match on the other side, while a LEFT OUTER JOIN keeps every row from the left table and fills the right side with NULLs.
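
              To illustrate the difference with a made-up example (table and column names are illustrative only, not confirmed schema):

              ```sql
              -- LEFT OUTER JOIN keeps every computer, with NULLs where there is
              -- no matching log row; swapping in INNER JOIN would silently drop
              -- the computers that have never logged a task.
              SELECT c.DeviceName, l.*
              FROM Computer c
              LEFT OUTER JOIN DistributionTaskLog l
                  ON l.Computer_Idn = c.Computer_Idn;   -- assumed join key
              ```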


              I've been playing a bit more with the SQL stuff now.


              So far... i've made:


              A MostRecentDates stored procedure along with a MostRecentDate function. (The stored procedure runs every 5 minutes to grab the most recent dates and place them in a temporary table for quick access.)

              I can call this function to get the most recent scan of any type (vulscan, inventory, custom definitions, etc.) at 5-minute resolution.

              This gives me a way to quickly find a computer out in the field that's working and that I can go review.
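
              A minimal sketch of that idea (the scratch table and the scan-date column are placeholders; the real procedure presumably pulls several date columns, one per scan type):

              ```sql
              -- Hypothetical: rebuild a scratch table of latest scan dates every
              -- 5 minutes (e.g. from a SQL Agent job). HWLastScanDate is an
              -- assumed inventory-scan column; swap in your real columns.
              CREATE PROCEDURE dbo.MostRecentDates
              AS
              BEGIN
                  SET NOCOUNT ON;
                  TRUNCATE TABLE dbo.MostRecentDatesCache;   -- pre-created scratch table
                  INSERT INTO dbo.MostRecentDatesCache (Computer_Idn, DeviceName, LastInventoryScan)
                  SELECT Computer_Idn, DeviceName, HWLastScanDate
                  FROM Computer;
              END
              ```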


              A PatchFriday function that I can call to get a specific date to compare against vulscan dates for any given month and year.

              This way I can see which computers are most likely patched, based on their last scan times.

              We generally autofix patches to the environment by the Friday following Patch Tuesday, hence the name Patch Friday.
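
              The date math is straightforward: Patch Tuesday is the second Tuesday of the month, so Patch Friday is three days after that. A sketch of such a function (not necessarily how the real one is written):

              ```sql
              -- Hypothetical PatchFriday: Friday after the second Tuesday of @Month/@Year.
              CREATE FUNCTION dbo.PatchFriday (@Year int, @Month int)
              RETURNS datetime
              AS
              BEGIN
                  -- Day 0 in SQL Server is 1900-01-01, a Monday, so
                  -- DATEDIFF(day, 0, d) % 7 gives 0 = Monday ... 6 = Sunday,
                  -- independent of the DATEFIRST setting.
                  DECLARE @FirstDay datetime, @Wd int;
                  SET @FirstDay = DATEADD(month, (@Year - 1900) * 12 + @Month - 1, 0);
                  SET @Wd = DATEDIFF(day, 0, @FirstDay) % 7;

                  -- First Tuesday (weekday 1), then +7 for Patch Tuesday, +3 for Friday.
                  RETURN DATEADD(day, (1 - @Wd + 7) % 7 + 7 + 3, @FirstDay);
              END
              ```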


              A BINAddress stored procedure, along with a BINAddress field in the NetworkSoftware table and an extension to the LANDesk schema to show it.

              This generates the built-in NIC address for each computer and places it in a new "Built-in NIC Address" column.

              Now those pesky fake MAC addresses that come from air cards, VPN adapters, and the like are no longer a problem; we can see the true physical address of the machine in a single column in any query. Eventually I want to use this to replace what is in the NIC address field, so we can turn on duplicate MAC address deletion.
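
              The selection logic might look roughly like this; the adapter table and column names below are placeholders for whatever the inventory schema actually uses:

              ```sql
              -- Hypothetical: pick one physical MAC per computer, skipping
              -- virtual adapters, and stamp it into the new BINAddress column.
              UPDATE ns
              SET ns.BINAddress = a.PhysAddress
              FROM NetworkSoftware ns
              JOIN (SELECT Computer_Idn, MIN(PhysAddress) AS PhysAddress  -- assumed names
                    FROM NetworkAdapter                                   -- assumed table
                    WHERE Description NOT LIKE '%VPN%'
                      AND Description NOT LIKE '%Virtual%'
                    GROUP BY Computer_Idn) a
                  ON a.Computer_Idn = ns.Computer_Idn;
              ```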


              Currently I am working on a way to generate an .ini file for every computer, used to schedule every local scheduler item granularly, down to the second, so we can eliminate the need for "RSTART" (random delays). This way we can place each computer's scan times in its own unique time slot, preventing overload on the core and making the result of every scan predictable.

              I'm approaching this from a stored procedure point of view right now, but there will also be a chunk of code in the machine's logon script that does the local scheduler resetting. We already have this system working to a point, but right now the spacing is not working out as well as I want, because we rely on a calculation over the computer's name to determine the number of seconds to add to each scan's initial start time. That works, but it's tied directly to the computer name, which is no good if names change too much at the next lease renewal.

              I want to create a way to dynamically generate schedules for every machine and then push out the update, much like the way power management schemes work in 9.02.
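
              One name-independent way to get the per-machine offset would be to derive it from the device's database key, which survives renames (a sketch, assuming the standard Computer table and a one-hour stagger window):

              ```sql
              -- Hypothetical: spread local scheduler start times across a
              -- 3600-second window using the stable Computer_Idn instead of
              -- a hash of the (changeable) computer name.
              SELECT DeviceName,
                     Computer_Idn % 3600 AS StartOffsetSeconds
              FROM Computer;
              ```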