Apr 15, 2020
Today, whether through lift-and-shift or re-architecting, almost every large enterprise has migrated at least some workloads to the cloud, putting increasing pressure on IT operations to manage a hybrid big data architecture effectively.
The trending problem: Large enterprises are running enormous big data clusters with thousands of nodes that require full-stack visibility to optimize application performance, support SLAs, uncover infrastructure inefficiencies and minimize MTTR (mean time to repair). They need to make Spark apps run faster and stop Hadoop clusters from blowing up. They need to deal with any malfunctioning workload as quickly as humanly possible.
The solution: In the quest to control cloud spend, analytics are key. To meet budgets, organizations need more transparency: they must determine usage patterns, understand average peak computing demand, map storage patterns, determine the number of core processors required, treat nonproduction and virtualized workloads with care, and more.
According to Gartner, “Most monitoring solutions, while valuable in their own right, have not optimized the process of troubleshooting performance and availability problems. Users often complain of limited visibility with too few tools or too much complexity with too many tools, and everything in between.”
Big data teams ultimately need powerful, in-depth insights: the ability to visualize and optimize big data operations instantly and at scale, a single dashboard for all big data environments, and automated infrastructure optimization.
Ash Munshi, CEO of Pepperdata, joins me on Tech Talks Daily to discuss the importance of removing the blindfold to control cloud spend. We also discuss how businesses can stay rightsized post-migration.