I answered some items around a similar question a while back:
Additionally, Microsoft does now strongly recommend using dism.exe over imagex.exe for Windows 10 deployments, as it is optimized for them (this is simply them saying: use their new tool [DISM] instead of their old one [ImageX]). You can deploy a WIM using either. They make no recommendation about sector-based vs. file-based imaging, so that will come down to your choice. We switched from ImageW to DISM (WIM) for our deployments via LD, and in our testing we saw decent improvements in deployment speed with DISM compared to ImageX.
At the end of the day it comes down to how you want to use it. ImageW was built for the XP/Windows 7 era, when people would build a thick image and then defragment it; sector-based imaging would capture that defragmented end state and clone it exactly onto each client. Windows 10 handles fragmentation and SSDs much better, so this isn't really a concern anymore.
My personal recommendation is to switch to dism.exe (WIM) if you are deploying Windows 10, simply because it is the industry standard and the benefits of ImageW are fading out. You may not need the newer DISM/WIM features yet, but switching now will make it easier to take advantage of them later if that time comes. One major benefit is BitLocker pre-provisioning, which encrypts the disk while DISM is applying the image. That adds essentially zero time to the imaging process, instead of waiting 30-40 minutes after imaging for BitLocker to run. This was our driving factor in switching. Our current image takes about 45 minutes to run, and that includes all BIOS/UEFI automation, BitLocker encryption, WIM deployment via DISM, drivers, apps, configs, domain binding, etc.
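For context, BitLocker pre-provisioning boils down to a couple of WinPE commands. This is only a sketch of the general approach, not our exact template; the drive letter and protector choice are assumptions:

```
rem In WinPE, after partitioning/formatting but BEFORE applying the image.
rem Used-space-only encryption of an empty volume completes in seconds.
manage-bde -on C: -UsedSpaceOnly

rem ...apply the WIM with dism /Apply-Image here...

rem After first boot (e.g. from a provisioning task), add a key protector
rem so the volume is actually protected:
manage-bde -protectors -add C: -TPM
```

The trick is that encryption happens before any data is on the disk, so there is almost nothing to encrypt; the image is then written into an already-encrypted volume.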
As for stability ("a more stable OS down the road with fewer issues"): I have never heard this before, and it seems baseless to me. If anything, ImageW would provide a more stable image, since it is a true exact copy at the sector level. But it really doesn't matter; both will provide the same end result in stability if used properly.
Hope this helps,
I am of the same mindset that it shouldn't really make a difference in stability, but I've been tasked with tracking this down.
I agree that ImageW was built for older OSes and that moving to the newer methodology could be beneficial. I've done a bit of googling for information on how to use DISM to run a scripted install and ended up in a rabbit hole. Most of what I found relates to SCCM or MDT, which leaves out a lot of the information I would need to implement it with LD.
We just recently went through a project to encrypt laptop drives, so having the encryption built into the image deployment process would be a good reason to change methods. Could you point me to documentation on other benefits of using DISM?
We are going to be starting on our Windows 10 upgrade process shortly. Would you mind sharing your template to give a noob on DISM a head start?
Thank you for your response.
If you choose to use ImageX, it's natively supported by LD through the Deploy Image action:
For DISM, you select Other:
So you would format the disk per usual (LANDESK handles the magic on its own):
Then you can configure the settings for Deploy Image:
Then you can tell it to create the system boot files via an execute command action:
C:\Windows\System32\bcdboot C:\Windows /l en-US
There shouldn't really be any scripting needed to do this.
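For reference, the underlying WinPE steps that the Deploy Image action performs look roughly like this. This is a sketch, not LD's exact commands; the image path, index, and mapped drive letter are assumptions:

```
rem Apply the WIM to the formatted system volume
dism /Apply-Image /ImageFile:N:\images\win10.wim /Index:1 /ApplyDir:C:\

rem Create the system boot files from the applied OS (the execute command
rem action shown above)
C:\Windows\System32\bcdboot C:\Windows /l en-US
```

Knowing the equivalent commands is handy if you ever need to troubleshoot the template or run the same steps by hand from a WinPE prompt.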
If you have issues with it auto-mapping the drives, I find that pre-mapping the drives just before the Deploy Image step and using drive letters works as well, but it breaks multicast.
Right now I do not have the steps to pre-provision BitLocker in LD documented, but I will publish something eventually and let you know.
For now you can refer to this SCCM version. Most of the difficulty is actually in the BIOS automation to enable the TPM on systems where it is off, but this can be worked around by having your techs manually log into the BIOS and enable it before imaging, until you can automate that step as well. Some systems ship with the TPM already enabled.
Hope this helps,
It's always interesting to get other people's take on this.
We (well, I really) switched from TBI to WIM pretty much with the move from OSD to Provisioning, around the rollout of Windows 7. For me, the benefit of WIM files is that I can create a monthly Windows image with the latest updates rolled into it, so our desktop guys don't have to run through WSUS on each PC build. The main benefit is that the image is much closer to a 'Lite Touch' or 'Zero Touch' image, once you get the provisioning sequence sorted, and is easy to customise, whereas a TBI is more like an old-school Ghost image: once taken, that image is set in stone until you create the next one.
Also, if you use ImageX and a WIM, you can pre-encrypt the drive with BitLocker, which takes about 10 seconds in the template before the image lays down. It saves roughly 30-60 minutes of encryption time, depending on whether you have SSDs or spindle drives.
Thank you for all the information you've added to this thread. I will be referring back to it quite often.
The sense I'm getting is that whether people are using a TBI or WIM file, they have walked through the initial setup (probably going into audit mode) to create the image file that is subsequently captured and deployed.
What I am doing is taking the Windows installation ISO, mounting it to a bare-metal VM, launching setup, and going into audit mode to install .NET, C++ libraries, and updates. I then generalize the VM and shut it down to capture with ImageW. I snapshot the VM before generalizing so that I can revert to it, install the latest updates, and recapture afterwards. I haven't tried using DISM to inject updates yet, but I don't find my process terribly time-consuming on a monthly basis.
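For what it's worth, injecting updates offline with DISM looks roughly like this. The paths and the update filename here are placeholders, not real values:

```
rem Mount the captured image, add a servicing update, and commit the change
dism /Mount-Wim /WimFile:D:\images\win10.wim /Index:1 /MountDir:D:\mount
dism /Image:D:\mount /Add-Package /PackagePath:D:\updates\example-update.msu
dism /Unmount-Wim /MountDir:D:\mount /Commit
```

This can replace the revert-and-recapture cycle for ordinary cumulative updates, though anything that needs to run inside a booted OS still requires going back into audit mode.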
What is being put forth as "more reliable" is doing a completely scripted install (via unattend files) using the install.wim file from the Windows install DVD and then doing all the installs of apps, patches, C++ libraries as tasks in the unattend file. The theory is that this will produce a fresh/clean install of the OS on each individual machine making the machine more reliable over the life of it.
I don't see that one way would be more reliable than the other, but that is what I'm trying to figure out. I'm also being told that the scripted "fresh install" is the Microsoft recommended process and that more PC vendors are moving that way. All of that, I am unsure of. Is anyone doing a scripted/fresh install rather than capturing a thin image?
I would say that it is a matter of preference. Per MS documentation for deploying Windows 10 via MDT/SCCM ( Create a Windows 10 reference image (Windows 10) ), they recommend doing the scripted install via unattend into audit mode within a virtual environment, then running sysprep, and then capturing a reference .wim to deploy via MDT/SCCM. There were actually some courses at MS Ignite this year that discussed automating this process. This is what it appears you are already doing manually.
MS also documents deploying applications and settings after laying down the image via a task sequence, not as additional actions within the unattend. This would be the equivalent of using a provisioning template to install applications after OS deployment.
As for the tasks within the unattend file, one thing to understand is the actual hooks that MDT/LDMS use during a TS or provisioning task. Whether you end up using setup.exe and passing in the unattend with your .wim to do a "scripted" installation, or you pre-sysprep a reference WIM and deploy it with DISM as MS's documentation shows, the deployment lays down the OS bits on the disk and ends up with a C:\Windows\Panther\unattend.xml file. That file is what gets executed during the 7th (oobeSystem) pass regardless of which mechanism got you to that point; the additional actions are executed synchronously or asynchronously, typically ending with SetupComplete.cmd running. This is where MDT/LDMS inject the commands that tell the unattend to kick off their task sequences, and it is likely the same place you would add your own commands, since it means the OS is fully booted and the commands run as the SYSTEM account. So at a high level there is no difference between having LDMS install the applications, patches, C++ runtimes, etc. and manually adding them to the unattend.xml as actions to be run.

When you get deeper into the process, though, you will realize that manually adding the actions to the unattend actually limits you: those commands run once, and you cannot easily add logic, reboots, edits, etc. The benefit of using MDT/LDMS for these deployments is that they can handle logic and reboots, and they log errors to a central location instead of just to the unattend log on the local disk. This is literally their purpose, so I would avoid cramming all of your task steps into the unattend and having to edit an unattend file for every single change you want to make.
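As a concrete illustration of the hand-off described above, the content of C:\Windows\Setup\Scripts\SetupComplete.cmd amounts to something like the following. The agent executable name and log path here are hypothetical, not MDT's or LDMS's actual file names:

```
rem Runs once, as SYSTEM, after OOBE completes and before first logon.
rem MDT/LDMS drop a line like this to hand control back to their engine;
rem a hand-rolled unattend would list its one-shot commands here instead.
"C:\provisioning\agent.exe" /resume >> C:\Windows\Temp\provision.log 2>&1
```

Everything after this point runs inside the task-sequence engine, which is what gives you retries, reboots, and central logging that a bare unattend cannot provide.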
I will reiterate my first comment: it is a matter of preference. There are lots of ways to do things; you just have to pick the one that is most effective for you.
I would have whoever is driving this change provide documentation from MS for why it is more reliable and better to do a "scripted" install of the OS every single time, with the install commands embedded directly within the unattend, and present their case.
Hope this helps,
This is what I was looking to find out and really what I suspected.
I will look into building the base image as a WIM file. ImageW is an older technology, and there is logic in moving to DISM for future compatibility.
Thank you for all the information. If I could give a double star for feedback to indicate the degree of informativeness, I definitely would do that!