If you are like many other pharmaceutical, life science, and healthcare companies, you're probably asking yourself: Is it time to make a substantial move to the cloud? If so, what data, solutions, and infrastructure should be migrated? Do you launch with a “big bang” approach, or one step at a time? How do you make sure your data and intellectual property (IP) are safe and secure? How will you develop and maintain integrations with existing on-premises data sources, applications, and solutions? What operational timeframe should you commit to? Days? Months? Years? And how will you anticipate yearly operational costs in an on-demand, elastic, managed-service model?
It seems like every life science and healthcare organization is wrestling with these questions and their variations. Why? Because cloud-based computing and solutions have definitively arrived in the marketplace, and their promise and value are simply too great for companies to ignore, particularly given the rapid acceleration of today's competitive landscape. No longer is the question "if" the cloud should be a part of your strategy and approach, but "what," "how," and "when" systems and new solutions should be deployed. At the same time, the pressure to get something going in order to deliver the proverbial "quick win" has never been greater.
However, the unfortunate truth is that most decision-makers err on the side of being too risk-averse, and get caught in the all-too-familiar trap of trying to gather enough information to be certain of success. The questions posed above are open-ended and cannot be answered with definitive certainty; in fact, the answers depend entirely on an assumed business context. One size does not fit all, and the answers will change as market dynamics change. If that is not acknowledged and understood up front, then the organization will spiral into a seemingly endless cycle of design and planning without ever actually implementing anything. In short, decision-makers land in the dreaded paralysis-by-analysis zone, where ideals—the perfect solution, the big win—impede initiating some really great solutions. Don't fall into the trap of letting perfect get in the way of great!
One very effective method for reducing risk while increasing the chances of success with cloud deployments is to take a stepwise, tactical, lean start-up approach. Keep a firm and steady eye on the long-term strategic plan, but work to get there by starting small and growing via incremental steps. The good news is that the cloud is made to order for this methodology: storage and compute capacity are elastic and on-demand, and the necessary infrastructure environment can be enabled in a very short timeframe, e.g., hours to days instead of weeks to months. In other words, these attributes make it quite easy to utilize the cloud as a “virtual elastic sandbox” in which to test ideas, tinker with a proof-of-concept (POC), or investigate new ways of doing business without making a huge strategic investment or commitment. Instead, the approach can be tested and proven on a small scale, and then seamlessly and almost immediately scaled up for larger production purposes. Conversely, ideas and approaches that don't work out can be quickly terminated without wasting a lot of time and resources. This ability to "fail fast" greatly reduces risk, while the linear on-demand scalability and elasticity of the cloud allows successful projects and programs to be brought online and integrated into the new “business as usual” in a streamlined and efficient way.
The cloud also enables nimble and agile development and testing cycles. Beta and prototype versions can be modified, improved, and tested very quickly. This makes it much more likely that the end solution will hit the mark with end-users right from the start. Again, this approach minimizes risk by reducing the tendency for organizations to try to cram all possible features into the first release of a solution, which can needlessly extend the build and development time. With a cloud-based lean start-up approach, there is implicit recognition that multiple releases are easily accomplished within the same calendar year, and that there is no need to try to make the solution perfect from the beginning. In short, the solution will grow and evolve in a much more natural and organic way based on changing market conditions and end-user needs.
All of the above sounds good in theory, but what are some practical examples that are being tested and implemented by today's informed decision-makers? Here are three ideas to get started:
- Instead of spending time and effort building data marts from existing source data systems to get commercial sales force effectiveness data into an analytics- or reporting-ready format, utilize the cloud as a data quality and governance layer for your standard key performance indicator (KPI) dashboards and ad hoc reporting needs. Since the cloud is elastic, new data sets can be easily added on the fly, and storage and compute capacity scaled, as market conditions dictate. In short, there is no need to worry about performance bogging down because server capacity has been exceeded.
- Similar to the above, if your company has several disparate legacy CRM and marketing data systems due to a history of mergers and acquisitions, don't try to merge those systems. Leave the legacy environments in place and simply extract the relevant data into a cloud-based Hadoop environment. Perform your data quality and cleansing in the cloud, and then utilize the data for its intended purpose. There is no need to try to merge all legacy data together. Extract only the data needed for the task, and leave the rest in place. This is an excellent way to make the leap to the cloud in a manner that is very nimble and cost-effective.
- Set up a cloud environment as a big data analytic discovery sandbox. Finding and taking action on new insights for competitive advantage is more important than ever. In order to accomplish this, it is very important for organizations to set up a formal environment for discovery analytics. This is different from production or operational analytics which run on a routine basis. Discovery analytics require the ability to rapidly mine data and correlate many potential variables on huge data sets which can be augmented on the fly. A good use-case example here is within the health economics and outcomes research areas. Again, the cloud is an excellent way to accomplish this because of the elasticity and scalability of storage and compute capacity. That means you don't run the risk of oversizing or undersizing the environment. Rather, the environment expands and contracts on-demand, so you only pay for what you use.
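To make the second idea above more concrete, here is a minimal sketch in Python of the extract-and-cleanse pattern: pull only the fields needed for the task from two legacy CRM extracts, standardize them into a common schema in a cloud staging layer, and deduplicate, leaving the legacy systems untouched. The source layouts, column names, and cleansing rules are purely hypothetical placeholders for whatever your own legacy systems contain.

```python
import io

import pandas as pd

# Hypothetical extracts from two legacy CRM systems (in practice these would be
# files landed in cloud storage, not in-memory strings).
legacy_a = io.StringIO(
    "account_id,account_name,region\n"
    "A100,Acme Pharma ,NE\n"
    "A101,BetaBio,SE\n"
)
legacy_b = io.StringIO(
    "ACCT,NAME,TERRITORY\n"
    "A101,betabio,SE\n"
    "A102,Gamma Health,MW\n"
)

# Extract only the fields needed for the task; everything else stays behind.
NEEDED = ["account_id", "account_name", "region"]

def standardize(df: pd.DataFrame, mapping: dict) -> pd.DataFrame:
    """Rename to the common schema, keep only needed columns, clean up names."""
    df = df.rename(columns=mapping)[NEEDED].copy()
    df["account_name"] = df["account_name"].str.strip().str.title()
    return df

frames = [
    standardize(pd.read_csv(legacy_a), {}),  # already in the common schema
    standardize(
        pd.read_csv(legacy_b),
        {"ACCT": "account_id", "NAME": "account_name", "TERRITORY": "region"},
    ),
]

# Union the cleansed extracts and drop duplicate accounts (first occurrence wins).
combined = pd.concat(frames, ignore_index=True).drop_duplicates("account_id")
print(combined.to_dict("records"))
```

The same shape scales up naturally: swap the in-memory strings for files in cloud object storage and run the identical logic on a distributed engine when the data outgrows a single machine.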
Each of the above examples can be implemented on a small scale, and then expanded as success dictates, or quickly killed in a fail-fast approach. Either way, you maximize your chances of finding, developing, and implementing some truly great solutions while managing and minimizing your risk. Who knows, you just might find that the incremental, measured, cloud-based approach gets you to the perfect solution after all.