Service demands on IT at remote and branch offices are no different from those at central offices – except there are often fewer IT staff to handle requests, and they are often siloed from HQ. In today's data-driven environment, operating in silos makes little sense. In fact, consolidation and integration are key to reducing cost and complexity, managing regulatory risk, and delivering a seamless customer experience.
One crucial way to knock down these silos is to take the content from the local branch office and add it to the wider organisation's 'big data' analytics programmes. Data lakes are key to this, offering a way to assemble information created across a business, including important content from third-party sources and services. And regular readers will know that I believe big data analytics is vital for spotting and solving previously unaddressed issues and capitalising on emerging trends.
New technologies are emerging that enable organisations to add local branch office data to their central 'big data' lake more efficiently and cost-effectively than ever before, all in a way that lets them deploy new technology without disrupting users, with the flexibility to roll back at their discretion.
The benefits are obvious, but here are four things to consider before using these solutions to bring your branch office into your data lake architecture:
- A unified architecture will simplify management and reduce operational costs: plan to consolidate onto the same scale-out platforms you already run in your data centre where possible. This reduces the operational burden and makes centralised management more straightforward.
- Set your policies for your regulatory and customer risk profile: recovery options within a data lake give you far more flexibility to stay aligned with your particular industry's demands. For example, you may need to retain multiple 'states' of your key data at different stages during each working day, and your policies should reflect that.
- Deliver multiprotocol support: you may end up experimenting with and/or building big data analytics applications on a number of different platforms, including Hadoop distributions such as Hortonworks and Cloudera, so you'll need your data infrastructure to support them all.
- Capitalise on the value of software: the hardware you'll need is largely commodity; it's the software, and the updates delivered over the years, that will provide the real value. So look for the features, scale and control you need, and build out an enterprise licence agreement that covers the lifetime of your project.
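To make the second point above a little more concrete, here is a minimal sketch of what an intraday snapshot-retention policy might look like. Everything in it is illustrative: the function name, the policy parameters and the idea of identifying each snapshot by a timestamp are assumptions for the sake of the example, not any specific product's API.

```python
from datetime import datetime


def snapshots_to_retain(snapshots, now, intraday_keep=4, daily_days=7):
    """Illustrative retention policy: keep the newest `intraday_keep`
    snapshots taken today, plus the last snapshot of each of the
    previous `daily_days` days. Older snapshots are dropped."""
    today = now.date()
    retained_today = []      # today's snapshots, newest first
    last_per_day = {}        # date -> most recent snapshot that day
    for ts in sorted(snapshots, reverse=True):
        if ts.date() == today:
            if len(retained_today) < intraday_keep:
                retained_today.append(ts)
        else:
            day = ts.date()
            if day not in last_per_day and (today - day).days <= daily_days:
                last_per_day[day] = ts
    return retained_today + sorted(last_per_day.values(), reverse=True)
```

A policy like this is one way to balance the regulatory need to keep multiple 'states' of key data each working day against the storage cost of holding every recovery point forever; the right parameters will depend on your industry's demands.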
There are of course other things you need to consider, so let me know what tops your list in the comments.
I’ll also be discussing this and other issues around getting analytics programmes moving at our upcoming Big Data Chat on 15 December with several other esteemed panellists – please log on and join the conversation.
Cross posted from my LinkedIn page.