Recently I had the opportunity to participate in a joint webinar with Mark McKinney, Director of Enterprise Analytics at Sprint, and Sanjay Vyas, CEO and founder of Diyotta. During the webinar, Mark shared a number of lessons from Sprint’s journey to modernize its data architecture. What became very clear in the conversation was that becoming data driven demanded changes in technology, culture, and some business processes. Below are some of my key takeaways from the webinar.
It starts with business outcomes
One of the key talking points was the business outcomes of using analytics more effectively. In the case of Sprint, there was a heavy focus on how they could drive a richer and more personalized customer experience if they could harness all the customer-related data they had available – traditional application data, network data, and newer data types such as social media and web clickstreams. Knowing everything you can about the customer, whether they buy on the web or in a physical store, drives better business outcomes.
Satisfying the appetite for analytics
In the case of Sprint, there has always been an appetite for analytics. What is changing in the modernization journey is that users now have more complete customer data available than ever. Along with that, IT needed to find a way of helping users help themselves when it comes to finding and preparing data for their purposes. The traditional approach of funneling every data request through IT simply would not scale.
Implementing the plan
In his parting remarks, Mark outlined two key aspects that were critical in their implementation journey:
Planning and scoping upfront is of paramount importance. He said, “you have to think big, act small, and move fast!” This meant there had to be sufficient planning and scoping of the long-term needs, such as how to handle real-time data ingestion and how to scale not just storage but also compute. But then you need to be able to iterate fast when you begin to implement and show results.
Communication and expectation management were critical to ensure everyone gets on the same page and stays on the same page as the project unfolds. Often, as business needs change, it is easy to forget the original decision criteria.
Questions from the session
We had a number of interesting questions that came up in the Q&A portion:
Sponsorship and ownership of the initiative: The key sponsor was in the business – the owner of digital. Obviously other stakeholders from finance and IT were involved. But it started with business ownership.
Adaptability of traditional enterprise data warehouse (EDW) and ETL skills: Modernization and selecting new technologies does not mean your old skills become redundant. Quite the opposite. In many cases, the foundational knowledge your team already has will be enough for them to pick up modern data integration tools very quickly.
Examples of richer customer experiences: One of the key focus points was to gain much more complete insight into customer preferences, behaviors, and usage patterns (voice vs. data) in order to personalize the customer’s experience when they interact through a sales or support channel.
Best practice for where data strategists and data scientists report: Increasingly, data strategists are part of, or closely aligned with, the enterprise and data architecture teams. It is important for the data strategy to be a holistic strategy so that data can be leveraged by all departments. Data scientists, on the other hand, are increasingly located in the business units to create deeper and better linkage between business questions and analytical insights. Most of these projects are iterative, and ongoing engagement is critical. This, of course, drives the need for self-service, governance, and security capabilities in the data platforms you implement.