Well, here's my opinion on this and, as usual, I expect other luminaries to add their views as well.
When considering making changes to a process, I would look at a couple of things about the changes you're planning in order to determine the best approach. None of these would involve anything remotely near the back end of the product. It might work, but you know that if there is a chance for something to foul up, it will, and support won't look on that favourably.
The most significant consideration is the impact that the change will make on existing instances of that process. There are certain things Process Manager will simply not let you do, specifically removing statuses from a live process. Assuming that is not what you're doing, the next consideration is whether the new parts of the process rely on a piece of data that only new instances will have. For instance, say you add a boolean to the main process form and use it to decide some process flow. You need to be aware that all previous instances of the process will have that value as NULL, and any flow control will treat them as such. There are techniques within 7.4.0 that can help you overcome that.
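To make the NULL pitfall concrete, here's a minimal SQL sketch. The table and column names (`pm_process_instance`, `use_new_route`) are invented for the example and are not the actual product schema:

```sql
-- Old instances hold NULL in a newly added boolean column,
-- because the column did not exist when they were created.

-- A condition written as "false" silently excludes old instances:
-- NULL = 0 evaluates to UNKNOWN, so those rows are not returned.
SELECT COUNT(*) AS old_route_count
FROM pm_process_instance
WHERE use_new_route = 0;   -- misses every pre-change instance

-- Treat NULL explicitly as "old behaviour" instead:
SELECT COUNT(*) AS old_route_count
FROM pm_process_instance
WHERE COALESCE(use_new_route, 0) = 0;
```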
So presuming you're clear on those, I suggest you still take a copy into a Dev/Test environment and work through the design changes. As you correctly say, you won't be able to design transfer the whole process. What you need to do is design transfer all of the new elements of the design and document the steps needed to implement them. Then go to your live environment, import the designs and use the steps to update the process. So if you create a new collection and associated window and then add the action, transfer the window and object, then complete the rest in the live environment. I would always suggest that the changes are carried out during a scheduled outage period and not while the system is live with users. If you build in Dev based on a copy of live, the transfer should work fine.
Overall, the best approach is to have a Dev environment that you use within a release-cycle approach: snapshot live, develop and build a design transfer file, take a live outage to update, then snapshot the new environment and start the cycle over again.
The method that suits me best is to take a copy of the process, make all the amendments, DT that, and then update all the open IPCs to use the new process using SQL.
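As a rough illustration of that SQL step — the table and column names here are invented for the example, the real schema will differ, and direct back-end updates like this should be tested carefully as they are unlikely to be supported:

```sql
-- Repoint all still-open instances from the old process design to
-- the amended copy that was just design-transferred in.
-- Replace the GUIDs with the ones from your own system.
UPDATE pm_process
SET pm_guid_process_type = '22222222-2222-2222-2222-222222222222'   -- new process
WHERE pm_guid_process_type = '11111111-1111-1111-1111-111111111111' -- old process
  AND pm_is_closed = 0;  -- only touch open IPCs
```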
It's a trade-off between having to remember/document exactly what you need to update in the live process if you only DT the elements you need, versus the time it takes to DT the whole thing, and where it makes sense to do the bulk of your testing/UAT activities. You obviously have to test the DT part as well, so you can be sure that it all fits together and that you have compensated the old open IPCs to work correctly with the new elements and calculations.
(One of the things I always have to do is set a default value on the old IPCs for any new booleans created.)
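A hedged sketch of that backfill, again with invented table and column names:

```sql
-- New boolean attributes are NULL on pre-existing IPCs, so give
-- them an explicit value before the new flow logic evaluates them.
UPDATE pm_process_instance
SET use_new_route = 0          -- 0 = keep the old behaviour
WHERE use_new_route IS NULL;
```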
The main reason I don't like linking up new elements in my production database is that I then have to do a lot of testing during the service outage before I'm happy with it, and invariably I miss a couple of small things that I then have to redo at a later stage. If you DT the whole thing, then that testing and config can mostly be done prior to your service outage, and you just have to verify that your new process is in use by the old IPCs.
The easy way is to let the old processes run their course and start new ones with a copy of the changed process as the new default. That sort of depends on the number of instances of the process that are around.
I wasn't aware that flicking the GUID over at the back end was a supported function, BTW.
However, I wonder if you could use a bulk action to reinitialise all your 'old' active processes to the new one? If so, that gives you a way of DTing the new copied/changed process into prod and smoothly jumping the old instances of the process across to the new. It's just a thought, but it's worth looking at.
The approach I take is that if the process change is minor (including not needing much downtime to repeat in the LIVE system), then wherever possible I try to change the main process directly, of course after taking a SQL backup first. After implementing the process change I take another SQL backup, do a quick end-to-end test of the new process, and then revert the SQL DB back to the state immediately after the process change was made. I do it this way so that the LIVE system is not filled up with test IPCs, but the process is quickly verified.
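The backup/test/revert cycle above could look like this in SQL Server T-SQL — the database name and file paths are placeholders, and the intermediate steps are shown as comments:

```sql
-- 1. Safety net before touching the live process design.
BACKUP DATABASE ServiceDesk TO DISK = 'D:\Backups\pre_change.bak';

-- 2. Make the process change in the designer, then take a second
--    backup that captures the changed design but no test data.
BACKUP DATABASE ServiceDesk TO DISK = 'D:\Backups\post_change.bak';

-- 3. Run a quick end-to-end test, creating throwaway test IPCs.

-- 4. Revert to the post-change state, discarding the test IPCs
--    while keeping the amended process design.
USE master;
ALTER DATABASE ServiceDesk SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
RESTORE DATABASE ServiceDesk FROM DISK = 'D:\Backups\post_change.bak' WITH REPLACE;
ALTER DATABASE ServiceDesk SET MULTI_USER;
```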
I only create new instances of processes when a new major release is required or the change itself is major.
The reason I do this is that we use VIEWS heavily, and in a lot of cases the rules are tied to the process GUID. So I put in a new process and my views need updating too. I can't just update the existing views, though, as these need to remain in place for the old process, so you then need to duplicate all of the views as well. And DT'ing views is sometimes touch-and-go too, IMHO. Now, if view rules could accept a "collection" of process GUIDs, then the whole world would be a simpler place...
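To show why a "collection" of GUIDs would help, here's a sketch with invented view, table, and column names (the real view rules live in the product, not in raw SQL, so this is only an analogy):

```sql
-- Today: the rule is pinned to a single process GUID, so every new
-- process version means duplicating (or rewriting) the view.
CREATE VIEW v_open_requests AS
SELECT p.*
FROM pm_process p
WHERE p.pm_guid_process_type = '11111111-1111-1111-1111-111111111111';
GO

-- If rules accepted a set of GUIDs, one view could span the old and
-- new versions of the same logical process:
ALTER VIEW v_open_requests AS
SELECT p.*
FROM pm_process p
WHERE p.pm_guid_process_type IN (
    '11111111-1111-1111-1111-111111111111',  -- original process
    '22222222-2222-2222-2222-222222222222'   -- amended copy
);
GO
```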
Looks like there's no one best answer to this problem, but thanks for all the ideas. I'll just have to pick the right one depending on how big the changes are.
You're probably right on that. It really does depend on the nature and scale of the work you're undertaking.