I too am having a bandwidth issue. This is a local installation, so it's not crossing a WAN link, and I have turned off all throttling. When I copy the file to the test machine it takes less than 2 minutes; with LANDesk it takes 7 hours or more.
Zman, that is the patch that the LANDesk tech and I copied the .dll out of. We are on SP1 at this time and it will not "install" on SP1 so we copied the needed file over.
That file did speed things up a bit, but it still seems slower than it should be.
Time to call the tech back and tell them it is not working.
Please set Download > Bandwidth Throttling to 50%, with no packet delay. We are trying to be considerate of Internet traffic.
Just an update,
I have been working with LD on this and found that if I used Software Distribution, everything worked well, but if I tried to patch, the download speed dropped to 40 KB/s.
If I changed brokerconfig.exe from the dynamic setting to use the gateway only, the download speed was great.
If I set brokerconfig to "dynamically determine the connection route", then the repair task, vulscan, etc. keep resetting the connection: after every bit of information is sent, the client tries to connect to the core, fails, falls back to the gateway, and then repeats the whole cycle.
With what I have found, and with my TAM replicating it, I expect LD should be able to fix this.
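The fallback loop described above is enough by itself to explain the ~40 KB/s figure. As a rough back-of-the-envelope model (all numbers below are illustrative assumptions, not actual LANDesk values): if the client pays a failed direct-core attempt plus a gateway reconnect for every chunk, that fixed per-chunk cost swamps the actual transfer time.

```python
# Illustrative model: per-chunk reconnect overhead vs. raw transfer time.
# All constants are assumptions for the sake of the example, not LANDesk values.

CHUNK_SIZE_KB = 64        # hypothetical chunk size
LINK_SPEED_KBPS = 6000    # roughly a 50 Mbit/s link, in KB/s
CORE_TIMEOUT_S = 1.5      # hypothetical timeout for the failed direct-core attempt
GATEWAY_CONNECT_S = 0.1   # hypothetical gateway (re)connect cost

def effective_speed(file_mb: float, reconnect_per_chunk: bool) -> float:
    """Effective download speed in KB/s for a file of file_mb megabytes."""
    total_kb = file_mb * 1024
    chunks = total_kb / CHUNK_SIZE_KB
    transfer_s = total_kb / LINK_SPEED_KBPS
    if reconnect_per_chunk:
        # Dynamic route: failed core attempt + gateway reconnect on every chunk.
        overhead_s = chunks * (CORE_TIMEOUT_S + GATEWAY_CONNECT_S)
    else:
        # Gateway-only: connect once, then stream.
        overhead_s = GATEWAY_CONNECT_S
    return total_kb / (transfer_s + overhead_s)

print(f"dynamic (reconnect every chunk): {effective_speed(100, True):.0f} KB/s")
print(f"gateway-only (connect once):     {effective_speed(100, False):.0f} KB/s")
```

With these made-up constants, the per-chunk-reconnect case lands in the tens of KB/s while the connect-once case gets close to line speed, which is consistent with the symptoms reported in this thread.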
This is an SP2 update. Was your LANDesk tech able to duplicate this issue on SP2? In a lot of cases you cannot just take a file and assume it will be backward compatible. I had the same problem you are describing, and as soon as I applied the update everything worked well. I would make sure that the person looking into this for you has SP2 applied, not SP1 only.
Sorry, I never updated this thread: I was originally trying this on a Dev Core that was SP1, like our production ones, and just copied the .dll over. The latest tests were done on our SP2 Dev Core with the actual patch installed.
Our TAM has been able to duplicate this now and has submitted it to the engineering folks.
This problem exists in LDMS 2016.3 (9.6 SP3)!
Seriously... once a crippling problem like this is identified and corrected, it should NEVER come back. Someone should have a sticky note on their desk saying "Remember to check CSA file transfer speed after every new build!!"
I was battling super-slow (30-40 KB/s) downloads through the CSA for the last 2 days; a file took 6 hours to download over a 50-megabit internet pipe. For the last few years I have been wondering why CSA downloads always seem slower than they should be, and I finally discovered this document. Convinced that an 8-year-old non-viable workaround couldn't possibly have any effect on a piece of recent code, I changed from "dynamic" to "connect using CSA" and sure enough, suddenly I'm getting 20-40 MEGABIT transfer speeds, and the same file that took 6 hours yesterday took only a minute today. The problem is, we can't just change every agent to CSA-only mode, since the agent will be crippled when it comes back onto the LAN. Dynamic is the correct setting, but it's broken!
Windows 10 cumulative patches and 5 GB ISO files are never gonna finish downloading at 40 KB/s over an intermittent internet connection.
Running LDMS 2016.3 with CSA 4.3 and having the same downloading speed problem...
First troubleshooting Apple DEP with CSA for 2-3 days and now this... this is bad... come on LANDESK you can do better than this.
We are still suffering this now in 2016.3 with SU3 installed. CSA is really slow at downloading patches and this is over a LAN link just for our DMZ servers.
Major issue really.
I made a support case on this in April.
Ivanti will fix this bug in version 2017.3.
Answer from support:
We are looking at a way to improve proxyhost, as we believe it's doing a check on every chunk, since lddownload uses curl to download the files.
I got confirmation from the Production Team that the mentioned defect has been approved and is under development.
However, version 2017.1 is already under testing, so it is too late to fix it in that version.
The target release has been set to version 2017.3, which is planned for Q3 2017.
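To illustrate the kind of overhead a per-chunk check implies (this is a general sketch, not LANDesk's actual proxyhost or lddownload code): downloading a file chunk-by-chunk with HTTP Range requests costs a full connection setup per chunk if the connection is not reused, versus one setup total when it is. The sketch below spins up a throwaway local server so both variants can be compared end to end.

```python
# Sketch (not LANDesk's code): chunked download via HTTP Range requests,
# comparing a fresh connection per chunk against one reused connection.
# A per-chunk check or handshake multiplies round trips the same way.
import http.client
import http.server
import threading

PAYLOAD = b"x" * 65536   # 64 KB test "file" served locally
CHUNK = 4096

class RangeHandler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # allow connection reuse

    def do_GET(self):
        start, end = (int(v) for v in
                      self.headers["Range"].removeprefix("bytes=").split("-"))
        body = PAYLOAD[start:end + 1]
        self.send_response(206)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), RangeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def fetch_chunk(conn, offset):
    conn.request("GET", "/file",
                 headers={"Range": f"bytes={offset}-{offset + CHUNK - 1}"})
    return conn.getresponse().read()

# Variant 1: a new connection for every chunk (what a per-chunk check
# effectively forces) -- one TCP setup per 4 KB of data.
naive = b""
for off in range(0, len(PAYLOAD), CHUNK):
    conn = http.client.HTTPConnection("127.0.0.1", port)
    naive += fetch_chunk(conn, off)
    conn.close()

# Variant 2: one reused connection for all chunks -- one TCP setup total.
conn = http.client.HTTPConnection("127.0.0.1", port)
reused = b"".join(fetch_chunk(conn, off)
                  for off in range(0, len(PAYLOAD), CHUNK))
conn.close()

assert naive == reused == PAYLOAD
print("chunks:", len(PAYLOAD) // CHUNK, "- identical data both ways")
server.shutdown()
```

Both variants fetch identical data; the difference is purely the number of connection setups, which is why the symptom shows up as throughput collapse rather than corruption.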
We have found an answer to this, although it will probably only work within a DMZ/LAN environment.
We found that the preferred server .dat file in the local sdmcache folder on the clients contained our LAN configuration. That is fine in itself, but we use a DFS share on the LAN, and the entry is configured as the DFS share name.
That works great on the LAN, but the DMZ servers don't know how to reach the LAN DFS share, or even what it is.
By adding the actual server name into the local preferredservers.dat file on the client, patching speed was the same as on the LAN: minutes instead of hours.
I have now added a new preferred server on the core with the IP address ranges of the DMZ servers, but I believe it takes 24 hours for this to take effect on the Core, so we will see what happens once that is done.
It may help others that are having this issue with local DMZ servers.
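Before editing anything by hand, it can help to confirm the same failure mode on a given DMZ host: check whether the host can resolve, and reach the SMB port of, each name a preferred-server entry might point at. The hostnames below are placeholders, and the preferredservers.dat format itself is product-specific, so this sketch only tests name resolution and reachability, not the file contents.

```python
# Hypothetical diagnostic: can this machine resolve and reach SMB (port 445)
# on each candidate preferred-server name? Hostnames below are placeholders.
import socket

CANDIDATES = [
    "dfsroot.example.local",       # DFS namespace name (often fails from a DMZ)
    "fileserver01.example.local",  # actual server name (what ended up working)
]

def check(host: str, port: int = 445, timeout: float = 3.0) -> str:
    """Report whether host resolves and whether its SMB port accepts connections."""
    try:
        addr = socket.gethostbyname(host)
    except socket.gaierror:
        return "DNS resolution failed"
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return f"reachable at {addr}:{port}"
    except OSError as exc:
        return f"resolved to {addr} but port {port} unreachable ({exc})"

for host in CANDIDATES:
    print(f"{host}: {check(host)}")
```

If the DFS name fails resolution or the port is unreachable while the plain server name succeeds, that matches the behavior described above and points at the same fix.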