Hubert Baniecki

University of Warsaw · Warsaw University of Technology · MI².AI

h.baniecki (at) uw.edu.pl

I am a third-year PhD student in Computer Science at the University of Warsaw, advised by Przemyslaw Biecek. In spring 2024, I was a visiting researcher at LMU Munich, hosted by Bernd Bischl and Giuseppe Casalicchio. Before that, I completed a Master’s degree in Data Science at Warsaw University of Technology.

My main research interests are machine learning interpretability & explainable AI, with a particular emphasis on efficient explanation estimation and the robustness of post-hoc explanations to adversarial attacks. I also care about human-model interaction with applications in medicine, and support the development of several open-source Python & R packages for building predictive models responsibly.

I received the 2022 John M. Chambers Statistical Software Award.


recent news [previous]

2025 Jan The paper Efficient and accurate explanation estimation with distribution compression is accepted as a Spotlight at ICLR 2025 (top 5% of submissions).
2024 Nov The paper Increasing phosphorus loss despite widespread concentration decline in US rivers is published in the Proceedings of the National Academy of Sciences.
2024 Nov The paper Interpretable machine learning for time-to-event prediction in medicine and healthcare is accepted for publication in the Artificial Intelligence in Medicine journal.
2024 Sep The paper shapiq: Shapley interactions for machine learning is accepted at NeurIPS 2024.
2024 Aug The paper Aggregated attributions for explanatory analysis of 3D segmentation models is accepted as an Oral at WACV 2025 (top 9% of submissions).

selected publications [full list]

    1. Efficient and accurate explanation estimation with distribution compression
      H. Baniecki, G. Casalicchio, B. Bischl, P. Biecek
      ICLR 2025 (Spotlight)
      Compress then explain: sample-efficient estimation of feature attributions, importance, and effects.
    2. On the robustness of global feature effect explanations
      H. Baniecki, G. Casalicchio, B. Bischl, P. Biecek
      ECML PKDD 2024
      Theoretical bounds for the robustness of feature effects to data and model perturbations.
    3. The grammar of interactive explanatory model analysis
      H. Baniecki, D. Parzych, P. Biecek
      Data Mining and Knowledge Discovery, 2023
      Interactive model explanation improves the performance of human decision-making.
    4. Fooling partial dependence via data poisoning
      H. Baniecki, W. Kretowicz, P. Biecek
      ECML PKDD 2022
      Feature effect explanations can be manipulated in an adversarial manner.
    5. dalex: Responsible machine learning with interactive explainability and fairness in Python
      H. Baniecki, W. Kretowicz, P. Piatyszek, J. Wisniewski, P. Biecek
      Journal of Machine Learning Research, 2021
      2022 John M. Chambers Statistical Software Award