This log file (MalformedXml.errorLog) is huge in all three of our tenants. Everything seems to be working fine, so I haven't been too concerned with the contents. I periodically delete them and the systems create new ones, but they get extremely large again. Can anyone shed some light on this log?
Does anyone from Ivanti have an answer?
Thanks for posting to the Ivanti Community.
Sorry that it seems no one has been able to answer this for you yet. Please do consider the other ways to engage with us to get assistance.
Thanks for responding Michael.
So, can I infer from your comment that this is not a 'normal' thing and other users are not seeing the same issue?
I think this is "normal" behaviour: In our environments, these files get extremely large, too, although everything is working fine. Therefore, we just delete them from time to time.
I would log a support ticket and find out what the log is for. Personally I haven't investigated it.
The Logging Service writes an entry to this file when it hits an XmlException in a method called ReadNextLogEntry(). The exception is apparently raised when a log file entry is malformed in some way.
You can check your Windows Application Event Log for a corresponding entry. On my system Logging Service frequently complains about 'log4net' being an undeclared prefix.
Those malformed entries probably do not make it to the Logs table in ISM.
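The "undeclared prefix" complaint is standard XML namespace behaviour: an entry like `<log4net:event …>` uses the `log4net:` prefix without an `xmlns:log4net` declaration, so any conforming parser rejects it. A minimal Python illustration (the sample entry below is invented for demonstration, not an actual ISM log line):

```python
import xml.etree.ElementTree as ET

# A log4net-style element whose namespace prefix is never declared.
# (Illustrative sample only, not a real ISM log entry.)
malformed = '<log4net:event logger="MyApp" level="INFO">message</log4net:event>'

# Declaring the prefix makes the same entry parse cleanly.
wellformed = ('<log4net:event xmlns:log4net="urn:log4net" '
              'logger="MyApp" level="INFO">message</log4net:event>')

try:
    ET.fromstring(malformed)
except ET.ParseError as err:
    print("parser rejected it:", err)   # "unbound prefix: ..."

event = ET.fromstring(wellformed)
print(event.attrib["logger"])           # MyApp
```

So any writer that emits a `log4net:`-prefixed fragment without its namespace declaration will trip a strict reader the same way ReadNextLogEntry() apparently does.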
We have the same problem; our log files are growing by roughly 4 GB a day.
Is there a setting we can adjust to stop this logging?
We usually get this file after push operations, or after a direct SQL backup/restore of the database from one tenant to another.
After that we need to delete the file and restart the Logging service.
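That manual fix can be scripted. A rough Python sketch, to be run as Administrator on the ISM application server — note that both the log path and the service name below are placeholders, not the real values; check the actual file location and service name (services.msc) on your own server:

```python
import os
import subprocess

# Placeholder values -- substitute the real path and service name
# from your own ISM server before using this.
LOG_FILE = r"C:\Path\To\MalformedXml.errorLog"
SERVICE = "LoggingService"

def reset_logging(log_file=LOG_FILE, service=SERVICE, run=subprocess.run):
    """Stop the service, delete the oversized error log, start the service."""
    run(["net", "stop", service], check=True)
    if os.path.exists(log_file):
        os.remove(log_file)
    run(["net", "start", service], check=True)
```

Stopping the service before deleting avoids fighting the process for a file handle it may still hold open.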
My experience with this issue is that while the log file is growing, the data you expect to see in the Logs workspace is not getting loaded there; it is being written to this file instead.
According to support, the way to stop it writing to the log file and resume writing to the Logs workspace is to restart the Logging Service. Ivanti is investigating, but they don't see the issue in the cloud offering, so they have set up on-premise lab servers for testing. They have not been able to catch it happening in their lab tenants, which makes it difficult to diagnose and fix.
The unfortunate part is that the service shows a running status even though it isn't doing anything useful, so relying on the service status is not helpful. If you have a scale-out deployment, this can occur on one server and not the other(s). While the issue is occurring, entries land in the log file and are also written to the Windows Application event log. One suggestion is a scheduled job that mines the event log for these events and restarts the service when it finds them.
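That scheduled job could be sketched roughly like this in Python, reading the Application log with the stock `wevtutil` CLI. The message fragment to match and the service name are assumptions — check what actually appears in your own Application event log and adjust accordingly:

```python
import subprocess

# Assumptions to adjust: the message fragment that identifies the
# problem, and the Logging Service's real Windows service name.
MARKER = "undeclared prefix"
SERVICE = "LoggingService"

def recent_application_events(count=50):
    """Read the newest Application-log events as text (Windows only)."""
    out = subprocess.run(
        ["wevtutil", "qe", "Application", f"/c:{count}", "/rd:true", "/f:text"],
        capture_output=True, text=True, check=True)
    return out.stdout

def needs_restart(event_text, marker=MARKER):
    """True if the tell-tale malformed-XML complaint shows up in the events."""
    return marker.lower() in event_text.lower()

def watchdog():
    """Run this from Task Scheduler every few minutes."""
    if needs_restart(recent_application_events()):
        subprocess.run(["net", "stop", SERVICE], check=False)
        subprocess.run(["net", "start", SERVICE], check=False)
```

Since the service status stays "Running" during the fault, polling the event log like this is more reliable than polling the service itself.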
I just came upon this thread and thought I should comment. Please see this KB, which has a workaround watchdog that can be placed on a timer to "kick" the logging service when it goes haywire.
Please note that Development is still tracking this and trying to find a final fix. It's an elusive bug, however, since it is nearly impossible to reproduce on demand.