In my previous article, we explored some key elements of a successful digital transformation of the IT landscape. We saw that business requirements must lead such a transformation, but that a "lean" approach is needed to define and prioritise them.
Still, that first discussion did not answer whether implementing standard suites would help such a transformation or not.
But what are the characteristics of standard applications that may set them apart from self-developed platforms?
Standard suites for cost effectiveness?
The usual arguments for selecting a standard suite relate to risk management and budget.
On one hand, people assume that a standard platform will have fewer bugs because it has more users. That is logical: the more users you have, the quicker defects are found, and therefore the quicker they are fixed. One may conclude that a standard platform makes the landscape much more reliable and better able to meet customer demand.
Still, a company's IT architecture is not made of a single platform. Even with a full suite from one vendor, the need to interface with "foreign" systems will arise sooner or later, and issues usually come from those interfaces.
So yes, the risk of business interruption may be reduced, but not removed.
On the other hand, cost is often a valid argument, the assumption being that a standard platform will cost less than an application developed internally. The licence cost is indeed often lower than the estimate for a similar internal project. But do we really know how much it will cost? (See for example this nice article: “How Much Does a Typical ERP Implementation Cost?“.) The business case must also take into account implementation costs (how many consultants will you need, and for how many months, to implement and customise the fancy new tool?), recurring costs, and finally "learning costs": while internal IT is fully familiar with the languages and frameworks of its own tools, setting up a standard package may force it to bring in skills from outside.
Experience then shows that, while the global cost of a standard suite may be lower than that of a self-developed application, the difference is not that large once everything is taken into account.
The decision to go for a standard platform may then rely on other, more strategic, factors.
One reason could be the willingness to adopt a modular approach to the enterprise architecture. Implementing standard software forces IT to create interface modules independent from the applications themselves, which guarantees that interconnections between the different elements are decoupled and somewhat cleaned up. A new self-developed application, by contrast, will most probably tempt developers to code links to other applications directly into the core of the software, since that is usually quicker and easier.
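To make this decoupling idea concrete, here is a minimal sketch, with purely hypothetical names (no specific vendor or product is implied): each consumer is coded against an interface module, so the concrete system behind it can be swapped without touching the callers.

```python
from abc import ABC, abstractmethod


class CustomerDirectory(ABC):
    """Interface module: the contract the rest of the landscape depends on."""

    @abstractmethod
    def lookup_email(self, customer_id: str) -> str: ...


class StandardSuiteAdapter(CustomerDirectory):
    """Adapter wrapping a (hypothetical) standard suite's API."""

    def __init__(self, records: dict):
        # Stand-in for the vendor's client; a real adapter would call its API.
        self._records = records

    def lookup_email(self, customer_id: str) -> str:
        return self._records[customer_id]


class BillingService:
    """Consumer coded against the interface, not against any concrete system."""

    def __init__(self, directory: CustomerDirectory):
        self._directory = directory

    def invoice_address(self, customer_id: str) -> str:
        return self._directory.lookup_email(customer_id)


# Replacing the standard suite with a self-developed system only means
# providing another CustomerDirectory implementation; BillingService is untouched.
billing = BillingService(StandardSuiteAdapter({"C42": "jane@example.com"}))
print(billing.invoice_address("C42"))
```

The point is not this particular code, but the discipline it imposes: the link between two applications lives in a dedicated module, never in the core of either one.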
Another reason could be a possible future move to the cloud: with standard platforms already in place, a later migration to the cloud should be easier, while internal software may require a review of its code before such a switch can even be planned.
Reducing the variety of skills within the IT team could also be a good reason, provided the move to standard software is strong and broad enough.
But such a choice comes with weaknesses as well, the main one being the dependency on the vendor created by replacing core applications with its solutions. Small players can only accept this fact; there is no real way to escape it. Bigger actors can avoid the issue by getting involved in the package development itself through partnerships. SAP, for instance, tends to take this approach with its bigger customers.
In any case, such an approach requires strong support from management to make sure no business unit initiates a "parallel market" to get around it.
Self-developed applications for higher flexibility?
On the other side, we have in-house solutions, usually very well supported by users since they answer their requirements more closely. That is of course not the best argument: as we learnt before, requirements need to be managed with caution so as not to build yet another over-engineered machine.
The real advantage of an in-house application is that it keeps knowledge internal: the people who design and develop the application will in most cases be employees, and therefore more aware of its content when an issue occurs or a change has to be applied.
It also supports a higher degree of personalisation: not recommended in the back end for reasons already expressed, but an opportunity to stand out from the others on the front end. Developed with an Agile approach, the result can be more attractive and original than what competitors offer their customers.
The main weakness of these solutions is obvious: the need for a dedicated team with the right skills to maintain them. This usually leads to many applications built with different technologies, which in turn requires maintaining a large panel of experienced developers, each more or less specialised in one part of the architecture.
Technology may also become obsolete over time, making the tools harder to maintain, appropriate specialists harder to find, and replacement eventually mandatory. Without removing the risk entirely, standard platforms offer a more reliable approach in this respect thanks to their regular releases.
Again, there is nothing wrong with going in this direction. It simply requires very strong governance, both to restrain the business departments' appetite for fancy features, and to keep the variety and age of the technologies used across the different platforms consistent.
The real key: adopting a modular architecture
Based on these views, it seems that neither solution is perfect. We could therefore say that both approaches are perfectly valid depending on the global IT strategy, without either having a direct effect on the opportunity to go digital.
The real point is how such platforms, standard or not, are implemented: how the architecture has been thought through. And this has a real effect on the capacity to gain in effectiveness.
Juergen Jakowski, from SAP, has written a very interesting article on the subject, showing how enterprise architecture can affect the user experience (UX).
The best way to achieve this is first to set up a multi-layer landscape, where legacy systems and core databases are well separated from the distribution layer, which in turn connects to the front end through different interfaces. This has a valuable side effect: it helps handle the integration of old legacy systems into the new environment through dedicated tools, such as PEGA or Cameleon.
The architecture should then move from a product-oriented setup to a customer-oriented structure. This can be done by splitting every piece of application by single product on one hand, and by single target group on the other. By target group, I mean the type of customer targeted, but also the type of end device considered. Each application then becomes a building block that can be added, or not, to the final proposition requested by the business. The gain in time to market becomes quite obvious.
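The product-by-target-group split can be sketched as follows; this is a minimal illustration with invented product and group names, not a reference to any real catalogue.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class BuildingBlock:
    """One application module, scoped to a single product and target group."""
    product: str
    target_group: str  # customer type plus end device, e.g. "retail/mobile"


# Hypothetical catalogue of existing blocks in the landscape.
CATALOGUE = [
    BuildingBlock("savings", "retail/web"),
    BuildingBlock("savings", "retail/mobile"),
    BuildingBlock("loans", "retail/web"),
    BuildingBlock("loans", "corporate/web"),
]


def compose_offer(products: set, target_group: str) -> list:
    """Assemble a business proposition by picking only the blocks needed."""
    return [b for b in CATALOGUE
            if b.product in products and b.target_group == target_group]


# A new retail web proposition reuses two existing blocks as-is:
offer = compose_offer({"savings", "loans"}, "retail/web")
print([b.product for b in offer])
```

Because each block is independent, launching a proposition for a new target group mostly means adding the missing blocks, which is where the time-to-market gain comes from.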
In such an approach, the main challenge will probably be the interfacing, which needs to be properly separated for each application. Standard software may help keep things strict enough here, but if IT manages to stay the course, self-developed solutions can be part of the game as well.