LANDESK is gradually moving all of its services over to what are called self-electing session-based services.
Traditionally you had to pick a device to be a PXE rep, an XDD device scanning the network, or, for example, an MDR for multicast. This method has an architectural problem: if you assign a specific device in a subnet to be your PXE rep and that device loses its network connection or goes offline, your PXE service stops functioning.
With self-electing session-based services, you tell a group of devices (or let LANDESK pick based on a points system) which devices should host these services. So you would tell LANDESK that you need a PXE rep in subnet A. LANDESK then finds the best candidate devices (perhaps the top 3-4) to host the PXE service, and all of those devices install the PXE service with it disabled. Next, each of those systems calls out its "score"; whoever has the highest score (or, in a tie, calls it out first) activates its PXE service. If another device comes online and calls out a higher score than an existing "active" device, it takes over the service, and the previous host's service goes back to being disabled. The benefit is that LANDESK automatically keeps the service up at all times and self-manages who is hosting it. If you want, you can specify that the service should only run on devices x, y, and z while still gaining the high-availability benefits the old model lacked.
So how does LANDESK determine the points or score for a device? It might give 1 point for being a desktop (more likely to stay in a subnet than a laptop), points for a faster network card, more memory, and so on.
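To make the election mechanics concrete, here's a minimal sketch in Python. The attributes and point values are hypothetical (LANDESK's actual scoring rubric is internal to the product); it just illustrates the "highest score wins, ties go to whoever calls out first" behavior described above:

```python
# Illustrative sketch of score-based self-election. The point values and
# device attributes here are invented for illustration, not LANDESK's
# actual rubric.

def score(device):
    """Compute an election score; the highest score wins the active role."""
    points = 0
    if device["chassis"] == "desktop":    # desktops are less likely to leave the subnet
        points += 1
    if device["nic_speed_mbps"] >= 1000:  # faster network card
        points += 1
    if device["ram_gb"] >= 8:             # more memory
        points += 1
    return points

def elect(candidates):
    """Highest score wins; on a tie, whoever announced first (list order) keeps it."""
    return max(candidates, key=score)

candidates = [
    {"name": "LAPTOP-01",  "chassis": "laptop",  "nic_speed_mbps": 1000, "ram_gb": 16},
    {"name": "DESKTOP-07", "chassis": "desktop", "nic_speed_mbps": 1000, "ram_gb": 8},
]
print(elect(candidates)["name"])  # DESKTOP-07
```

Note that `max` returns the first candidate among equals, which mirrors the "in ties, calls it out first" rule if you treat announcement order as list order.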
Now you are probably saying: that is great and all, but how does it relate to multicast? It is essentially the same process I just described, which LANDESK will be adding for PXE in the near future, and which already exists for Multicast Domain Representatives. Currently LANDESK looks in each subnet and automatically decides which device is the best fit to offload multicast session management to. I believe each subnet gets 2-3 assigned dynamically as needed; if one goes offline, another client is elected to take its place. The great benefit of the self-electing session-based service here is that in the past, all session management for scheduled tasks was handled by the core and communicated over web traffic; this is no longer the case. That model typically slowed the core down and capped the number of nodes it could manage installations for at a time at a fairly low number, say 200. With the new model you get all of the same benefits as before, except that you no longer have to pick the MDRs, you gain high availability for the multicast service, and session management is offloaded to the temporary MDR, taking that load off the core while the MDR manages the installations and reports them back. This means you could now have many thousands of installation sessions going instead of only a few hundred.
So for your more specific questions above:
A. WOL would be sent to computers from there.
- This should be unaffected; however, you now have the added bonus of high availability and not needing to pre-configure an MDR in specific subnets. LANDESK will automatically create one on the fly as needed and remove it when it's done.
B. Patches would be sent there then to computers and the same for software.
- You will now see a major improvement in the speed of software and patch deployment using the new session-based MDRs, because they can have thousands of clients installing at the same time instead of a few hundred queued up. Software is still deployed to the MDRs and then spread peer-to-peer throughout the subnet, just like before.
Expect to see the majority of their services move to this new model.
Hope this helps to answer your questions,
Thanks for your reply.
With self-electing as the default method for assigning a representative, would it still be possible to assign one manually? I have a scenario where the branch doesn't warrant a PPS, but I wanted to save bandwidth by using multicast/peer download. However, the security policy only allows a single PC to access the core server subnet - and this PC needs to cache the package on behalf of the other managed clients.
Is that possible?
What you do is this:
- You roll out (in a "cache only" mode) the package to the "1 device per location".
- Now - having the package in their local cache - those devices will automatically act as peers / MDR's (though technically it's more "peer download"), because they already have the package in their local cache.
... sounds like that would address your particular problem?
Thanks for the suggestion. It does address the caching/make available the package aspect.
I am wondering if this can be done with just one task. From the LDMS 2016 agent settings screen, it seems both PPS and Multicast can be enabled.
And from this article, it seems to refer to the MDR downloading from a "package server".
From these two pieces of information, it looks like, with the use of self-organizing election, an MDR can automatically work with a PPS - because the MDR is just a sub-process of the deployment task, where the first download can be forwarded to the PPS?
So - let me clarify a few concepts so that the language makes a little more sense to you, especially since TMC changed MASSIVELY since the 9.5 days.
PPS / Preferred Package Server -- That's a bit of tech that "just" flips download paths from "http://MyServer/MyShare/MyFile.exe" over to "http://MyPREFERREDServer/MyShare/MyFile.exe" based on the client's location. No paths change - it's "just" the server name that flips (you may also be given a different set of credentials, but I'm trying to keep it simple here).
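In other words, it's roughly this string substitution - only the server name flips, the share and file path stay identical. (A hedged sketch: `flip_to_preferred` is an invented name for illustration, not a LANDESK API.)

```python
from urllib.parse import urlsplit, urlunsplit

def flip_to_preferred(url, preferred_server):
    """Swap only the server name in a download URL; the path is untouched."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, preferred_server, parts.path,
                       parts.query, parts.fragment))

print(flip_to_preferred("http://MyServer/MyShare/MyFile.exe", "MyPREFERREDServer"))
# http://MyPREFERREDServer/MyShare/MyFile.exe
```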
MDR / Multicast Domain Reps
*USED* to be a semi-elected process. Up to & including 9.5, the Core would check whether there was a statically configured MDR in the respective broadcast domain that was on - and if not, it'd hold an "election". The "election" was essentially a broad ping sweep & the first to respond would be "it" / the MDR for (ideally) the whole multicast session. If it went down, a new election would take place ... so the Core had to keep track of multicast / file-transfer windows across X many MDR's and other things on top ... LOTS of micromanagement.
This all changed drastically with 9.6 (and onward, obviously). We've essentially moved the burden from the Core over to the clients. Essentially how multicast works with 9.6 onward is as follows (I simplify things a bit, but not much):
- The Core either sends out a notification ("push"), or the client(s) otherwise decide to check for policies.
- Client(s) see that a new policy is targeted at them.
- Clients download the new policy & respond to it. I.e. - "Aha - I am to download package X, using distribution method Y".
- If multicast is enabled, then the client(s) enter a self-organised multicast. What is THAT you ask? Here goes...
Essentially a case of the following gets hollered over the subnet / broadcast domain - "Hey GUYS - I am going to download Package X. Does anyone already have the package in their cache?"
If someone already has the package/file (or a partial copy), then that'll get used & copied over. Assuming no bits of the package exist on the local network, an "actual" multicast takes place. That looks like this:
"OK Guys - none of you slackers have the package already. So I'm going to download the file. I'm going to wait a few minutes and give you time to join my multicast session if you want to. I'll multicast the package locally to you if you subscribe."
This is that (agent) setting here in this screenshot:
... so "a" client will download the package from source / from the remote PPS & then multicast the package around the local subnet / broadcast domain. If other devices pick up on the same policy (because you pushed the task out / told the clients to check for policies), they'll subscribe to the multicast & become members of it. The downloading client will multicast the package as it downloads it ...
... if the downloading guy (and acting MDR) goes down, someone else picks up the slack & continues. The Core (in the meantime) has much less to do, since clients know what they need to download & how they're told to do so, & "simply" sort it out amongst themselves, rather than having to be micromanaged as they were in the past.
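The client-side decision flow those quotes describe could be sketched like this. (A hedged illustration only: the function, data shapes, and labels are invented for this sketch, not actual LANDESK client code or message formats.)

```python
# Hypothetical sketch of the self-organised download decision: check peer
# caches first, otherwise become the acting MDR and multicast the download.

def obtain_package(package_id, peers, multicast_wait_secs=120):
    """Decide how to get a package onto this subnet."""
    # 1) "Hey GUYS - does anyone already have the package in their cache?"
    holders = [p for p in peers if package_id in p["cache"]]
    if holders:
        # Partial or full copies get used & copied over peer-to-peer.
        return ("peer_download", holders[0]["name"])
    # 2) Nobody has it: announce a multicast session, wait for subscribers,
    #    then download from source / PPS while multicasting as it arrives.
    return ("multicast_as_mdr", f"waiting {multicast_wait_secs}s for subscribers")

peers = [{"name": "PC-01", "cache": set()}, {"name": "PC-02", "cache": {"pkg-42"}}]
print(obtain_package("pkg-42", peers))  # ('peer_download', 'PC-02')
print(obtain_package("pkg-99", peers))  # ('multicast_as_mdr', 'waiting 120s for subscribers')
```

The key design point the sketch captures: the Core never appears in this flow - the clients resolve "who downloads, who multicasts, who copies from whom" among themselves.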
In regards to the other settings (PPS / Multicast & so on) - that's just "opt in / opt out" stuff. You CAN choose to enable the PPS to be used as a source - up to you. You CAN choose to enable a self-organised multicast, but peer download works just as well, if you already have the package in the respective environment .
The self organising multicast stuff is of particular benefit to the "Accelerated" push methods, where we get VERY aggressive (and can poke 10,000-s of devices within seconds and tell them "hey you - check for policies") ... and it's for that scenario where the self-organised multicast really shines.
So - "CAN" this be with one task? Yep - welcome to roll-out projects (assuming you're on 2016).
Since you'll be doing this regularly (I'm guessing) - just set this up as a rollout project template, and then you can distribute to your intended (effective) MDR's up-front ... and once you've got your desired success rate, you can start the actual job.
Does that help you out?
As an addendum:
* The "use multicast - yes / no" is essentially just a case of "once you HAVE the package, how do I distribute it around my subnet" situation.
* The use of a PPS is "Where do I *GET* the package from" - i.e. "Just the configured source" or "this is replicated, and I can use my PPS - yes/no".
Hope that clarifies.
Yes, that helped a lot. Thanks for clarifying.
The Rollout Project bit is a great suggestion. Haven't got my head around that, and this is a perfect use case.