The TMC service gets used for peer-download requests / serving as well - so it's possible that's what's at play here?
The place to check would be the log-files - they'd live on the client(s) in this location by default - "C:\ProgramData\LANDesk\Log\" - or wherever you moved ProgramData to.
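If it helps, here's a rough way to eyeball which logs in that folder are freshest - just a throwaway Python sketch, with the default path from above (adjust it if you've relocated ProgramData):

    import datetime
    import pathlib

    # Default client log location (see above) - adjust if ProgramData was moved.
    log_dir = pathlib.Path(r"C:\ProgramData\LANDesk\Log")
    for f in sorted(log_dir.glob("*.log"), key=lambda p: p.stat().st_mtime, reverse=True)[:10]:
        stamp = datetime.datetime.fromtimestamp(f.stat().st_mtime)
        print(stamp, f.name)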
You can get additional information by turning on debug logging (that may paint a clearer picture) as well. Information on that can be found here:
I'm also attaching annotated logs that I wrote up for a different customer, which show you how to interpret requests for / serving of files for peer download, for instance.
This will give you a good idea as to what's going on.
You may also want to check the SDCLIENT logs (C:\Program Files (x86)\LANDesk\LDClient\Data\) based on date/time-stamp ... it's possible someone / something is scheduling downloads with alternate delivery methods or whatnot?
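Same idea for narrowing those down by date/time-stamp - a quick sketch along these lines (I'm assuming a .log extension and the default path above; the cut-off date is just an example, set it to roughly when the odd traffic started):

    import datetime
    import pathlib

    # Default SDCLIENT data folder (see above); "since" is an example cut-off.
    data_dir = pathlib.Path(r"C:\Program Files (x86)\LANDesk\LDClient\Data")
    since = datetime.datetime(2017, 5, 1, 6, 0)
    for f in sorted(data_dir.glob("*.log"), key=lambda p: p.stat().st_mtime):
        stamp = datetime.datetime.fromtimestamp(f.stat().st_mtime)
        if stamp >= since:
            print(stamp, f.name)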
Thank you Paul!
Will look for that and will come back!
Okay, that helps me very much.
I got many, many entries like:
ProcessMcastChannelIAmRepResponse my TieBreakGuid:b417cad5-0f8d-4291-83c7-465ceef418ef, the client that sent message TieBreakGuid:007d88f3-5792-4695-8a66-3947c8615ec2, for SelfElectId: 19ef4c3d-6757-48ed-96d7-48ba1caf40d2, my current state: Not chosen
SendOutMyCertificate(): Sending out my client public cert
Received request message for my public certificate
ProcessPublicCertResponseMessage, index=1, total=1, Source=172.26.73.102
It looks like I have a problem with the public certificates here.
I did not configure anything about them.
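To get a feel for how bad the flood is, I counted the message types and the distinct peers in the log with a quick Python sketch (the debug log file name and path are what I assume on my machine - adjust as needed):

    import collections
    import re

    counts = collections.Counter()
    peers = set()
    # Assumed name/location of the tmcsvc debug log - adjust for your client.
    with open(r"C:\ProgramData\LANDesk\Log\xtrace-tmcsvc.log", errors="ignore") as log:
        for line in log:
            for marker in ("ProcessMcastChannelIAmRepResponse",
                           "SendOutMyCertificate",
                           "Received request message for my public certificate",
                           "ProcessPublicCertResponseMessage"):
                if marker in line:
                    counts[marker] += 1
            hit = re.search(r"Source=(\d{1,3}(?:\.\d{1,3}){3})", line)
            if hit:
                peers.add(hit.group(1))

    for marker, n in counts.most_common():
        print(n, marker)
    print(len(peers), "distinct Source= peers seen")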
LANDESK Support pointed me to the new feature "Self-Electing Subnet Services" (SESS),
a service that I already had an eye on, because everything that is new is a potential suspect.
I will disable the service on our clients and see what happens.
I'm curious as to what you find. I'm seeing this a lot as well.
After disabling SESS (Agent Settings -> Connectivity Settings) on all clients,
tmcsvc.exe seems to be very stable again and does not use that much traffic and CPU anymore.
This advice from LANDESK Support pointed us in the right direction:
Please make sure that all of your clients are using the same connectivity settings - there may still be some machines where this setting is not updated, and they generate multicast traffic (there are about 70 clients listed in the 'xtrace-tmcsvc.log' file which you provided earlier).
To check the agent connectivity settings on the client, you can have a look in the XML ("ClientConnectivityBehavior_LDCORE2016_vxxxxx.xml"). That is where you can find how the SESS feature is written in the config file for client connectivity settings - it is mentioned there as "csep".
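A quick way to find those csep lines in your own XML is to search the behavior file for them - e.g. with this little Python sketch (I'm assuming the behavior XML lands in the LDClient Data folder next to the sdclient logs; adjust the path/pattern for your core name and version):

    import glob

    # Assumed location of the client connectivity behavior XML - adjust as needed.
    pattern = r"C:\Program Files (x86)\LANDesk\LDClient\Data\ClientConnectivityBehavior_*.xml"
    for path in glob.glob(pattern):
        print(path)
        with open(path, errors="ignore") as xml_file:
            for number, line in enumerate(xml_file, start=1):
                if "csep" in line.lower():
                    print(f"  {number}: {line.rstrip()}")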
I've not seen the self-electing services clog up either CPU or network performance (since the self-election isn't super intensive and doesn't happen every second) ... so an interesting sighting for sure.
Having trawled through the conversation here - I don't think you've stated the patch level you're on (or I missed it) - so it's possible this is something that got fixed after the initial release (while I saw updates to the self-electing files, I didn't see mention of a defect similar to this). Could be just a "bad luck" combination that maybe newer files don't have?
Now that things are working, once you get to a more relaxed stage, you might want to do limited spot-testing/checking to see if updates to the files will behave the same in your environment, or not. Could be a situational bug (as I'd expect a lot more sightings if it were a general thing) ... hard to say. Anyway, at least you're able to get on with other things now as well.
Yes, I missed mentioning that point...
We first started the 2016 agent rollout after patching the new Core Server up through the SU, SU2 and SU3 patch levels.
Maybe one thing could be a hint: I noticed that the high CPU was not at our remote locations, which have small and well-segmented subnets with fewer than 254 clients per subnet. I only saw it at our main location, with about 2000 clients and the core server in one IP subnet.
Unfortunately, time is too scarce to check every possibility.
Just to elaborate a little bit on this....
We actually had this issue arise today, after having version 2017.1 installed for over a month. Starting yesterday at 6:00am, the amount of multicast on the network skyrocketed and actually affected our production systems' performance. It was traced back to Endpoint Manager / LANDesk.

I followed this same advice to correct the issue, but what this thread does not make clear is that even though you made the change, the clients will not pick up the change for up to 24 hours. To make the change immediate, you must schedule a task. To do this, from the Agent Settings section where you turned off SESS, select the "create a task" drop-down (the calendar-looking option with the clock) and choose "Change Settings".
On the Client Connectivity Settings line, select it, and from the drop-down pick the new setting you made. I set mine to the Default Server Connectivity Settings, as you can see below.
Choose Save when done and a task will be created.
Add your clients to the task and run it. It processes very fast - probably 10 minutes for several thousand systems in my case. Almost immediately we saw a huge drop in multicast traffic, and our system performance returned to normal.
I have a call with an engineer scheduled for tomorrow to review root cause. Hope this helps!
Minor correction / clarification.
Clients *always* check for any updates to their agent settings "whenever vulscan runs". So depending on how frequently you do / do not run it, that may affect your results here. But yes - that's your "magic bullet" - it's the execution of vulscan (of ANY kind) that'll check for & pull down updated versions of agent settings that apply to the client(s).
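So if you'd rather not wait for the next scheduled run, kicking off a vulscan on the client does the trick - a minimal sketch, assuming the default LDClient install path from earlier in the thread:

    import subprocess

    # Per the above: ANY vulscan execution will check for & pull down updated agent settings.
    vulscan = r"C:\Program Files (x86)\LANDesk\LDClient\vulscan.exe"
    subprocess.run([vulscan], check=False)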
If - rather than updating an existing agent behaviour - you want to roll out a different one, you can do that too via the "update / change agent settings" task.
... I'll add a screenshot tomorrow for clarity, when I'm less shattered.
For those affected - I would still ask you to keep working with support on this.
If this is affecting folks, then we need to make support and/or dev/PM aware of it being a problem, so that we can look at what's causing it and how to fix this sort of situation.
A workaround of "let's just not use CSEP" might be OK for some things, but if you need (say) Provisioning / PXE reps on remote sites ... you'd need to have CSEP enabled (you could do a single named host per subnet, but that sort of defeats the point of it being resilient via election).
So - "getting this right" does make sense, as having a workaround is fine (as long as it works) ... but if said workaround starts affecting corporate tools such as provisioning, that tends to be rather less than pleasant.
Might be worth referring support to this thread / listing any kind of defect ID in this thread (when you have one), so that anyone else who might be affected can just directly request to be added to "that issue" (which should make things smoother for everyone involved, I hope).
It is sooooo frustrating.
To use the new "Agent State" feature, I activated SESS on our new 2017.3 SU2 server.
As long as we had only a few agents out with this setting, it was no problem.
But now we have more than 600 clients in ONE subnet, and the problems are starting again:
High CPU usage on the clients, and the network gets flooded by multicast traffic.
I will raise a ticket (again) with support, but last time they only told me to disable SESS.
Does anyone have an update on this? I just had the same issue over here with a new deployment. Out of nowhere, users started seeing it: it causes mouse/keyboard input to get choppy and the computer to grind to a halt.