Honeycomb (https://honeycomb.io) processes 100 billion incoming data events per day into a datastore with 2 million columns — all in anticipation of that moment when an engineer will wake up at 3 am and blearily ask, “what crashed this time?”. That engineer needs to get interactive, sub-second response time to their queries. How do we serve these sleep-deprived Sherlock Holmeses at a reasonable price? We do so by constraining the problem — finding ways to make tradeoffs that reflect our users’ needs and priorities.

In this talk, I’ll discuss how Honeycomb continues to evolve its storage and query engines in tandem with its user experience. By understanding our users better, we can make decisions about data, storage, and query engine that allow us to offer a powerful tool for exploring big data, and for continuing to scale.


Danyel Fisher is the User Data Expert for Honeycomb.io, a company that provides observability services to help engineers ask complex questions of their data. Before he joined Honeycomb in 2018, he worked at Microsoft Research on data visualization and human-computer interaction. He received his PhD from the University of California, Irvine, in 2004. His areas of interest include big data analytics and progressive analytics, approximate querying, and understanding user experience around data.



State-of-the-art simulations and experiments capture processes and phenomena in multiple high-resolution fields. The richness of the data provides an unprecedented opportunity to gain new insights, but also introduces various challenges for the analysis. This talk will focus on visualization approaches from recent work for making sense of large spatio-temporal data as well as ensembles. I will concentrate on three aspects in particular: (1) visual mappings for expressive presentation, (2) ML-based approaches and other measures to assess similarities and enable search, as well as (3) systems for large-scale exploration, including in situ visualization on supercomputers and large displays.


Steffen Frey received his PhD degree in computer science from the University of Stuttgart in 2014 and worked as a postdoctoral researcher at the Visualization Research Center (VISUS). Since 2020, he has been an assistant professor at the Bernoulli Institute at the University of Groningen, the Netherlands. His research interests are in visualization methods for increasingly large quantities of scientific data. In particular, he has made contributions to the analysis of time-dependent data, machine learning for visualization, in situ visualization, and the dynamic steering and performance prediction of visual computing systems. He is a member of the Steering Committee of EGPGV (Eurographics Symposium on Parallel Graphics and Visualization) as well as WOIV (International Workshop on In Situ Visualization), and he serves as a papers co-chair for the PacificVis Visualization Notes and IEEE LDAV this year.