Oct 2023 - Sep 2024

Mihoko Nojiri - Multi-scale cross-attention transformer encoder for event classification

Description

We deploy an advanced Machine Learning (ML) environment, leveraging a multi-scale cross-attention encoder for event classification, taking the gg→H→hh→bbbb process at the High-Luminosity Large Hadron Collider (HL-LHC) as an example. In the boosted Higgs regime, the final state consists of two fat jets. Our multi-modal network extracts information from the jet substructure and from the kinematics of the final-state particles through self-attention transformer layers. The learned information is then integrated by an additional transformer encoder with cross-attention heads to improve classification performance. We demonstrate that our approach outperforms current alternative ML methods, whether based solely on kinematic analysis or on a combination of kinematics with mainstream ML approaches. We then employ various interpretive methods to evaluate the network results, including attention-map analysis and visual representations of Gradient-weighted Class Activation Mapping (Grad-CAM). The proposed network is generic and can be applied to analyse any process carrying information at different scales.
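
To illustrate the idea of fusing two self-attention streams through a cross-attention encoder, the sketch below shows a minimal PyTorch module. It is not the authors' actual architecture: the class name `CrossAttentionFusion`, the embedding dimension, the number of heads and layers, and the token shapes for the substructure and kinematic inputs are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Illustrative sketch: two self-attention streams (jet substructure,
    event kinematics) fused by cross-attention heads for classification."""

    def __init__(self, embed_dim=64, num_heads=4, num_layers=2):
        super().__init__()
        # One self-attention encoder per modality / scale (assumed sizes).
        self.substructure_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                       batch_first=True), num_layers)
        self.kinematics_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                       batch_first=True), num_layers)
        # Cross-attention: substructure tokens query the kinematic tokens.
        self.cross_attention = nn.MultiheadAttention(
            embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)
        self.classifier = nn.Sequential(
            nn.Linear(embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, 2))  # signal vs. background logits

    def forward(self, substructure_tokens, kinematic_tokens):
        # substructure_tokens: (batch, n_constituents, embed_dim)
        # kinematic_tokens:    (batch, n_objects, embed_dim)
        s = self.substructure_encoder(substructure_tokens)
        k = self.kinematics_encoder(kinematic_tokens)
        # Queries from one scale attend to keys/values of the other.
        fused, attn_weights = self.cross_attention(query=s, key=k, value=k)
        fused = self.norm(fused + s)   # residual connection
        pooled = fused.mean(dim=1)     # average over tokens
        return self.classifier(pooled), attn_weights

# Toy usage with random inputs (shapes are assumptions, not the paper's setup):
model = CrossAttentionFusion()
substructure = torch.randn(8, 30, 64)   # e.g. 30 fat-jet constituent tokens
kinematics = torch.randn(8, 6, 64)      # e.g. 6 reconstructed-object tokens
logits, attn = model(substructure, kinematics)
```

Returning the cross-attention weights alongside the logits reflects the interpretability step described above: attention maps of this kind can be inspected to see which substructure tokens attend to which kinematic features.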