File Director - Planning the number of appliances required

Version 4

    Verified Product Versions

    File Director 2.0, 3.0, 3.5, 3.6, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 2018.1

    The Ivanti File Director platform is enterprise-ready and can be scaled to accommodate the syncing requirements of an organisation of any size.


    Unless deploying to a test environment, we recommend that all production deployments are configured with at least two nodes (for resiliency/redundancy purposes).

    A frequently asked question is: how many appliances do I require to service x number of users? Unfortunately, this is not a straightforward calculation, since every environment is different, both in terms of the features used and in terms of end-user data and usage patterns.


    In a default deployment, with the Windows client deployed and users in a 'steady state' (non-onboarding users who periodically interact with files in the map point folder), an appliance can typically support in the region of 3,500 concurrent users. Many factors can influence this figure, as detailed below.
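    As a rough first pass before factoring in the considerations below, the figure above can be turned into a naive appliance estimate. The sketch below assumes the ~3,500 concurrent users per appliance figure and the two-node production minimum stated in this article; the user count is a hypothetical example.

```shell
#!/bin/sh
# Naive sizing sketch (assumptions: ~3,500 concurrent users per appliance,
# minimum of 2 nodes in production). USERS is an illustrative value.
USERS=12000
PER_APPLIANCE=3500

# Ceiling division: any partial remainder still needs an appliance
NODES=$(( (USERS + PER_APPLIANCE - 1) / PER_APPLIANCE ))

# Production deployments should never run fewer than 2 nodes
if [ "$NODES" -lt 2 ]; then NODES=2; fi

echo "Estimated appliances for $USERS users: $NODES"
```

    Treat this only as a starting point; the feature and environmental factors listed below can move the real figure significantly in either direction, so validate with a pilot as described later in this article.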


    Features which may contribute to platform load, thereby impacting scalability


    - Kerberos single sign-on

    - Shared map points (server locking)

    - Platform Notifications (a configurable notification technology which we recommend disabling in Enterprise environments - read more below)*

    - In-location sync / Mapped drives configuration

    - PST sync

    - Delta syncs

    - Manual vs Automatic map points


    Environmental considerations which may impact scalability


    - SMB version in use on file server (SMB3 is more efficient than SMB2)

    - Usage pattern/concurrency of end users (e.g. users in the same timezone/working hours vs distributed)

    - Number of files

    - Frequency of file changes (churn)

    - Size of files

    - Performance (throughput/latency) of storage

    - Performance of hypervisor

    - Performance (throughput/latency) of network

    - Number of map points

    - Maximum configured Kerberos token size (where Kerberos SSO is used - the smaller the better)

    - Number of concurrent onboarding users (syncing for the first time, or receiving replacement hardware)

    - Uneven load balancing


    The most accurate way to size a deployment is to deploy a pilot to a sample volume of users against a small number of appliances, monitor appliance utilisation to evaluate how close to capacity the platform is, and plan the final number of appliances using this data.

    The best metric for establishing appliance load is the number of worker threads busy over time. This can be obtained by downloading the appliance diagnostic logs from the Admin Console and reviewing the Perfmon counters contained within the archive. These are in CSV format and so can be inspected with Excel or Splunk. The column of interest is Client_Threads_Busy.
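    As a quick alternative to Excel or Splunk, the peak value of that column can be pulled out on the command line. The sketch below assumes a CSV with a Client_Threads_Busy header column; the file name and sample rows are illustrative - point it at the counters extracted from your diagnostic archive.

```shell
#!/bin/sh
# Sketch: find the peak Client_Threads_Busy value in a perfmon CSV.
# File name and sample data are illustrative stand-ins.
CSV=perfmon_sample.csv
printf 'Timestamp,Client_Threads_Busy\n09:00,120\n09:05,380\n09:10,95\n' > "$CSV"

# Locate the column by header name, then track its maximum. Sustained
# values near 400 mean requests are queueing (see the next paragraph)
# and the platform needs more appliances.
awk -F, 'NR==1 { for (i = 1; i <= NF; i++) if ($i == "Client_Threads_Busy") c = i; next }
         $c > max { max = $c }
         END { print "Peak busy threads:", max }' "$CSV"
```

    Looking at the sustained value over working hours, rather than a single spike, gives the fairest picture of how close the appliance is to capacity.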


    The File Director appliance can accept a large number of client connections (a 'connection' in this context is a request via the REST API, such as an upload of a file). These connections are actioned by worker threads, which are tuned for optimal throughput and are therefore finite (up to 400 worker threads can be active concurrently). An appliance with 400 busy threads will cause new connections to queue and await a free thread, resulting in a delay to the client request. If this happens often, it indicates the appliance is over capacity and the platform needs to be scaled further by adding more appliances.


    *File Director has a 'notification' feature designed to increase the propagation rate of changes made by other DataNow clients to managed content on the user's endpoint. Each client that makes a change to content via File Director invokes a write to the cluster. This value is then polled by every online client every 30 seconds. The feature was designed with small environments in mind and is not well suited to large enterprise environments, where the polling and cluster writes can significantly affect scalability. As such, we recommend that read and write notifications are disabled for best performance.


    Without notifications in place, content changed by other users is discovered when:

    - A user logs on

    - A user opens the file

    - A user opens a folder containing the file

    - A user refreshes Explorer if the folder containing the file is already open

    - A user performs an operation on a sibling file or folder


    To disable 'write' notifications, please contact Ivanti support, quoting


    This involves issuing the following command from the shell of a support-mode-activated appliance:

    curl http://localhost:8081/admin/globalPolicy -d 'notification_mode=0'


    Windows client read notifications (these are not used in Mac/iOS clients) can be controlled using the following setting:




    This is a DWORD containing the number of milliseconds between polls. It cannot be disabled; however, it can be set to a value higher than the default of 60000 (1 minute). We suggest 24 hours, which equates to 86400000.