CRIBL CO-FOUNDER DISCUSSES OBSERVABILITY TRENDS (PART 2)

This blog post is the second and final part of an article about an interview we had with Dritan Bitincka, co-founder of Cribl, an Observability Infrastructure vendor. The interview proceeded along three tracks: (a) Mr. Bitincka’s Journey to Cribl, (b) Industry Changes, and (c) The Future of Observability and Cribl. This part of the article covers the last two tracks.

Industry Changes.  We shifted gears to the changes he has seen in the industry. Dritan said that, top of mind, he is amazed at how fast observability tooling and solutions are being adopted. Part of the reason, he explained, is that now that most of them are cloud-based, organizations can validate the value of a solution very quickly. Before the cloud became so prevalent, he said, evaluations took months, with salespeople running bake-offs and customizing each on-premises environment. Now, with cloud-based, standardized systems, it takes days or weeks, so decisions can be made much sooner. The basic message is that with cloud-based systems, customers can self-serve and self-evaluate. His second point was that the volume of data stored in observability systems is growing about 30% annually; but if storage were free and infinite, his customers tell him they would store 5-10x more. This desire to store more is a big part of the bet Cribl is making in the Observability Infrastructure market. Third, Dritan sees many observability tools companies getting funded: the space is hot, he says (we agree, and AI applications will only push this further).

I asked a bit more about the cloud, and we discussed the cloud version of LogStream, which went to general availability in October. In this model, Cribl offers LogStream as a service to its customers. Mr. Bitincka said the interest in this service has been phenomenal and that a ton of proof-of-concept trials are underway. There is particular interest from organizations that don’t have operations teams.

The Future of Observability and Cribl.  Then we shifted to where the observability market is headed over the next 3-5 years. Dritan said that his customers want to instrument as much data as possible, including directly from applications as they run. He explained that Cribl offers AppScope as open source, allowing users to collect performance data from all sorts of applications. He said that only roughly 20% of applications at organizations are fully instrumented using Application Performance Management (APM), because most programs are closed-source and difficult to instrument. His view is that application instrumentation tooling will emerge that goes well beyond the simple agents that reside on devices throughout the computing, networking, and security systems in use. Peering this deeply into applications will produce a deluge of data, which customers can already use LogStream to handle, control, and route.

The place where all this new data will reside is what Dritan calls an Observability Lake. Once the data is saved to long-term storage, he says, every team in the organization can access it on a self-service basis, potentially forever. A significant advantage of this approach is that, in contrast to a database system where you must know ahead of time how to structure the data, an Observability Lake lets teams investigate incidents and data that were not anticipated, and replay those events against recent or very old data.

I was interested to learn how AppScope works. Dritan explained that AppScope is a black-box instrumentation technology that sits between the operating system and the application. It sees all the interactions between the application and the filesystem, the CPU, the network, and so on. It captures the metadata associated with this traffic and forwards it downstream. He also explained that it doesn’t matter what language the program is written in, whether Ruby, C, Java, or anything else, because AppScope is simply intercepting syscalls. I got the sense from Mr. Bitincka that Cribl is betting that AppScope will challenge the agent-based approach that is so common today.
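To make that idea concrete, here is a minimal sketch of library-level call interposition on Linux using LD_PRELOAD. It is our own illustration of the general technique, not AppScope's actual implementation: an interposing library wraps a libc call such as write(), records metadata about the call, and passes it through, without knowing or caring what language the application was written in.

/*
 * Minimal sketch of library-call interposition on Linux (LD_PRELOAD).
 * This is NOT AppScope's implementation; it only illustrates how an
 * instrumentation layer can sit between an application and the OS,
 * observing calls such as write() regardless of the application's language.
 *
 * Build:  gcc -shared -fPIC -o libintercept.so intercept.c -ldl
 * Run:    LD_PRELOAD=./libintercept.so ls -l
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <unistd.h>

typedef ssize_t (*write_fn)(int, const void *, size_t);

ssize_t write(int fd, const void *buf, size_t count)
{
    /* Look up the "real" write() the first time we are called. */
    static write_fn real_write = NULL;
    if (real_write == NULL)
        real_write = (write_fn)dlsym(RTLD_NEXT, "write");

    ssize_t n = real_write(fd, buf, count);

    /* Emit metadata about the call (here: to stderr; a real collector
     * would forward it downstream). Skip stderr itself to avoid loops. */
    if (fd != STDERR_FILENO)
        fprintf(stderr, "[intercept] write(fd=%d, bytes=%zd)\n", fd, n);

    return n;
}

AppScope's real loader and capture machinery are of course far more sophisticated and cover the full range of filesystem, network, and CPU activity, but the principle Dritan described is the same: because the interception happens at the boundary between the application and the operating system, it works identically for Ruby, C, Java, or any other runtime.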

I challenged Mr. Bitincka and asked what Cribl’s plans are for tackling new observability infrastructure challenges. What I learned is that the company plans to enhance both LogStream and AppScope further. LogStream today has 60 integrations with other systems; that number will grow, and there will be more protocol support and more device support, even including IoT systems. The company’s goal, according to Dritan, is to help customers with the “data generation” phase: how to unearth more data. Customers agree, with Cribl getting many more inbound requests from prospects as they drive towards pervasive and ubiquitous instrumentation. So, the company’s next focus is to generate data at a large scale and in a standard way. He said the company will double down on AppScope and use it to develop a universal edge collection system. The idea behind this edge collection system is that it would remove the headaches customers face in collecting, processing, and managing observability data. Our take on this strategy is that Cribl will start competing with some of the more traditional observability vendors who have developed their own agents that reside on computing systems. But if Dritan is right that his customers use as many as a dozen tools, potentially all with different agents, then Cribl’s single-collector strategy could prove valuable. This new data collection capability would allow for simplified collection and consolidated Observability Lake storage, letting customers use whatever analytical tools they want.