Hubert Baniecki

University of Warsaw · Warsaw University of Technology · MI².AI

h.baniecki (at) uw.edu.pl

I am a 3rd-year PhD student in Computer Science at the University of Warsaw, advised by Przemyslaw Biecek. In spring 2024, I was a visiting researcher at LMU Munich, hosted by Bernd Bischl and Giuseppe Casalicchio. Prior to that, I completed a Master’s degree in Data Science at Warsaw University of Technology.

My main research interest is explainable machine learning, with a particular emphasis on the robustness of post-hoc explanations to adversarial attacks. I also work on human-model interaction with applications in medicine, and contribute to the development of several open-source Python and R packages for building predictive models responsibly.

I received the 2022 John M. Chambers Statistical Software Award.


recent news [previous]

2024 Nov A paper Increasing phosphorus loss despite widespread concentration decline in US rivers is published in the Proceedings of the National Academy of Sciences.
2024 Nov A paper Interpretable machine learning for time-to-event prediction in medicine and healthcare is accepted for publication in the Artificial Intelligence in Medicine journal.
2024 Sep A paper shapiq: Shapley interactions for machine learning is accepted at NeurIPS 2024.
2024 Aug A paper Aggregated attributions for explanatory analysis of 3D segmentation models is accepted for publication at WACV 2025 in the 1st round of reviews (12% of valid submissions).
2024 Jul I will present a paper Efficient and accurate explanation estimation with distribution compression at the ICML 2024 Workshop on DMLR.

selected publications [full list]

  1. Efficient and accurate explanation estimation with distribution compression
    H. Baniecki, G. Casalicchio, B. Bischl, P. Biecek
    ICML 2024 Workshops
    Compress then explain: Sample-efficient estimation of feature attributions, importance, effects.
  2. On the robustness of global feature effect explanations
    H. Baniecki, G. Casalicchio, B. Bischl, P. Biecek
    ECML PKDD 2024
    Theoretical bounds for the robustness of feature effects to data and model perturbations.
  3. The grammar of interactive explanatory model analysis
    H. Baniecki, D. Parzych, P. Biecek
    Data Mining and Knowledge Discovery, 2023
    Interactive explanation of a model improves the performance of human decision making.
  4. Fooling partial dependence via data poisoning
    H. Baniecki, W. Kretowicz, P. Biecek
    ECML PKDD 2022
    Feature effect explanations can be manipulated in an adversarial manner.
  5. dalex: Responsible machine learning with interactive explainability and fairness in Python
    H. Baniecki, W. Kretowicz, P. Piatyszek, J. Wisniewski, P. Biecek
    Journal of Machine Learning Research, 2021
    2022 John M. Chambers Statistical Software Award