Data Lake: Save Me More Money vs. Make Me More Money

By Bill Schmarzo

The data lake is a centralized repository for all the organization’s data of interest, whether internally or externally generated.

2016 will be the year of the data lake. But I expect that much of the 2016 data lake effort will be focused on activities and projects that save the company more money. That is okay from a foundation perspective, but IT and the business will both miss the bigger opportunity to leverage the data lake (and its associated analytics) to make the company more money.

This blog examines an approach that allows organizations to quickly achieve some “save me more money” cost benefits from their data lake without losing sight of the bigger “make me more money” payoff – by coupling the data lake with data science to optimize key business processes, uncover new monetization opportunities and create a more compelling and differentiated customer experience.

Let’s start by quickly reviewing the concept of a data lake.

The Data Lake
The data lake is a centralized repository for all the organization’s data of interest, whether internally or externally generated. The data lake frees the advanced analytics and data science teams from being held captive to the data volume (detailed transactional history at the individual level), variety (structured and unstructured data) and velocity (real-time/right-time) constraints of the data warehouse. The data lake provides a line of demarcation that supports the traditional business intelligence/data warehouse environment (for operational and management reporting and dashboards) while enabling the organization’s new advanced analytics and data science capabilities (see Figure 1).

Figure 1: The Data Lake

The viability of the data lake was enabled by many factors, including:

  • The development of Hadoop as a scale-out processing environment. Hadoop grew out of Google’s MapReduce and Google File System papers and was developed and perfected by internet giants such as Yahoo, eBay and Facebook to store, manage and analyze petabytes of web, search and social media data.
  • The dramatic cost savings of open source software (Hadoop, MapReduce, Pig, Python, HBase, etc.) running on commodity servers, which yields a 20x to 50x cost advantage over traditional, proprietary data warehousing technologies.
  • The ability to load data as-is, which means that a schema does NOT need to be created prior to loading the data. This supports the rapid ingestion and analysis of a wide variety of structured and unstructured data sources (see the schema-on-read sketch after this list).
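
To make the load-as-is point concrete, here is a minimal schema-on-read sketch using PySpark. The lake path and field names are hypothetical; the key idea is that the schema is inferred when the data is read, not declared before it is loaded.

```python
# Minimal schema-on-read sketch (PySpark); the path is hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("schema-on-read").getOrCreate()

# The raw JSON landed in the lake as-is, with no schema declared up front;
# Spark infers the structure when the data is read, not when it is loaded.
clicks = spark.read.json("hdfs:///lake/raw/web_clicks/")
clicks.printSchema()                      # inspect the inferred schema
clicks.createOrReplaceTempView("clicks")  # expose it to SQL immediately
spark.sql("SELECT count(*) AS events FROM clicks").show()
```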

The characteristics of a data lake include the following (a brief code sketch of the Ingest, Store and Analyze steps follows the list):

  • Ingest. Capture data as-is from a wide range of traditional (operational, transactional) and new (structured and unstructured) sources.
  • Store. Store all your data in one environment for cross-functional business analysis.
  • Analyze. Support the analytics and data science needed to uncover new customer, product and operational insights.
  • Surface. Empower front-line employees and managers, and drive more profitable customer engagement, by leveraging customer, product and operational insights.
  • Act. Integrate analytic insights into operational (Finance, Manufacturing, Marketing, Sales Force, Procurement, Logistics) and management (Business Intelligence reports and dashboards) systems.
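
As a deliberately simplified sketch of the Ingest, Store and Analyze steps, the PySpark fragment below lands two hypothetical sources as-is, persists them in one environment, and joins them for a cross-functional view. All paths and column names are assumptions, not a prescribed implementation.

```python
# Hedged sketch of Ingest -> Store -> Analyze; paths and columns hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lake-flow").getOrCreate()

# Ingest: capture a traditional operational extract and raw app events as-is.
orders = spark.read.option("header", True).option("inferSchema", True) \
              .csv("s3a://lake/raw/orders/")
events = spark.read.json("s3a://lake/raw/app_events/")

# Store: keep everything in one environment for cross-functional analysis.
orders.write.mode("append").parquet("s3a://lake/curated/orders/")

# Analyze: join transactions with behavior to surface customer insight.
(orders.join(events, "customer_id")
       .groupBy("customer_id")
       .agg(F.sum("order_amount").alias("revenue"),
            F.count("event_id").alias("touches"))
       .show())
```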

Data Lake Foundation: Save Me More Money
Most companies today have some level of experience with Hadoop. And many of these companies are embracing the data lake in order to drive costs out of the organization. Some of these “save me more money” areas include:

  • Data enrichment and data transformation for activities such as converting unstructured text fields into a structured format, or creating new composite metrics such as the recency, frequency and sequencing of customer activities (see the sketch after this list).
  • ETL (Extract, Transform, Load) offload from the data warehouse. It is estimated that ETL jobs consume 40% to 80% of all data warehouse cycles, so organizations can realize immediate value by moving those jobs off of the expensive data warehouse and onto the data lake.
  • Data archiving, which provides a lower-cost way to archive or store data for historical, compliance or regulatory purposes.
  • Data discovery and data visualization, which support the ability to rapidly explore and visualize a wide variety of structured and unstructured data sources.
  • Data warehouse replacement. A growing number of organizations are leveraging open-source technologies such as Hive, HBase, HAWQ and Impala to move their business intelligence workloads off of the traditional RDBMS-based data warehouse and onto the Hadoop-based data lake.
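
To make the enrichment and ETL-offload ideas concrete, here is a hedged sketch of computing recency/frequency/monetary composite metrics in the lake rather than on the data warehouse. The table paths and column names are assumptions.

```python
# Hedged sketch: RFM-style enrichment run in the lake, offloading the
# warehouse. All paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("rfm-enrichment").getOrCreate()

txns = spark.read.parquet("s3a://lake/curated/transactions/")

rfm = (txns.groupBy("customer_id")
           .agg(F.datediff(F.current_date(),
                           F.max("txn_date")).alias("recency_days"),
                F.count("txn_id").alias("frequency"),
                F.sum("amount").alias("monetary")))

# Publish the enriched table for downstream BI reports and data science work.
rfm.write.mode("overwrite").parquet("s3a://lake/curated/customer_rfm/")
```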

These customers are dealing with what I will call “data lake 1.0,” which is a technology stack that includes storage, compute and Hadoop. The savings from these “save me more money” activities can be nice, with a Return on Investment (ROI) typically in the 10% to 20% range. But if organizations stop there, they are leaving the 5x to 10x ROI projects on the table. Do I have your attention now?

Data Lake Game-changer: Make Me More Money
Leading organizations are transitioning their data lakes to what I call “data lake 2.0” which includes the data lake 1.0 technology foundation (storage, compute, Hadoop) plus the capabilities necessary to build business-centric, analytics-enabled applications. These additional data lake 2.0 capabilities include data science, data visualization, data governance, data engineering and application development. Data lake 2.0 supports the rapid development of analytics-enabled applications, built upon the Analytics “Hub and Spoke” data lake architecture that I introduced in my blog “Why Do I Need A Data Lake?” (see Figure 2).

Figure 2: Analytics Hub and Spoke Architecture

Data lake 2.0 and the Analytics “Hub and Spoke” architecture support the development of a wide range of analytics-enabled applications, including the following (a customer-retention example is sketched in code after the list):

  • Customer Acquisition
  • Customer Retention
  • Predictive Maintenance
  • Marketing Effectiveness
  • Customer Lifetime Value
  • Demand Forecasting
  • Network Optimization
  • Risk Reduction
  • Load Balancing
  • “Smart” Products
  • Pricing Optimization
  • Yield Optimization
  • Theft Reduction
  • Revenue Protection

Note: Some organizations (public sector, federal, military, etc.) don’t really have a “make me more money” charter; so for these organizations, the focus should be on “make me more efficient.”
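
As one hedged sketch of what a “make me more money” (or “make me more efficient”) application might look like, the fragment below trains a simple customer-retention (churn-propensity) model on curated lake data with Spark MLlib. Every feature, label and path here is an assumption for illustration, not the author’s prescribed design.

```python
# Hedged sketch: churn-propensity scoring as an analytics-enabled application.
# Feature, label and path names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("churn-scoring").getOrCreate()

# Features assembled from curated lake tables (e.g., the RFM metrics above).
df = spark.read.parquet("s3a://lake/curated/customer_features/")

assembler = VectorAssembler(
    inputCols=["recency_days", "frequency", "monetary"], outputCol="features")
train = assembler.transform(df)

# 'churned' is a hypothetical 0/1 label derived from historical attrition.
model = LogisticRegression(labelCol="churned", featuresCol="features").fit(train)

# Score every customer so front-line systems can act on the riskiest accounts.
scores = model.transform(train).select("customer_id", "probability")
scores.write.mode("overwrite").parquet("s3a://lake/analytics/churn_scores/")
```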

Big Data Value Iceberg
The game-changing business value enabled by big data isn’t found in the technology-centric data lake 1.0, the tip of the iceberg. Like an iceberg, the bigger business opportunities are hiding just under the surface in data lake 2.0 (see Figure 3).

Figure 3: Data Lake Value Iceberg

The “Save Me More Money” projects are the typical domain of IT, and that is what data lake 1.0 can deliver. However, if your organization is interested in the 10x-20x ROI “Make Me More Money” opportunities, then it needs to aggressively continue down the data lake path to data lake 2.0.

10x-20x ROI projects…do I have your attention now?


About the Author

Bill Schmarzo, author of “Big Data: Understanding How Data Powers Big Business”, is responsible for setting the strategy and defining the Big Data service line offerings and capabilities for the EMC Global Services organization. As part of Bill’s CTO charter, he is responsible for working with organizations to help them identify where and how to start their big data journeys. He has written several white papers, is an avid blogger, and is a frequent speaker on the use of Big Data and advanced analytics to power organizations’ key business initiatives. He also teaches the “Big Data MBA” at the University of San Francisco School of Management.

Bill has nearly three decades of experience in data warehousing, BI and analytics. Bill authored EMC’s Vision Workshop methodology that links an organization’s strategic business initiatives with their supporting data and analytic requirements, and co-authored with Ralph Kimball a series of articles on analytic applications. Bill has served on The Data Warehousing Institute’s faculty as the head of the analytic applications curriculum.

Previously, Bill was the Vice President of Advertiser Analytics at Yahoo and the Vice President of Analytic Applications at Business Objects.