Neural importance sampling (NIS) opens exciting new avenues for precise and fast theory predictions and data analysis at the LHC. In this talk, I will present two applications of NIS. First, I will discuss how it can be used to boost the performance of the MadGraph event generator via the MadNIS method, and how phase-space mappings can be combined with a set of small learnable components to improve the sampling efficiency while remaining physically interpretable. Second, I will show how NIS, in combination with differentiable programming and parallelization on GPUs, can lead to large numerical improvements and speed-ups in global SMEFT analyses at the LHC.
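For orientation, the following is a minimal sketch of the standard importance-sampling estimator that underlies NIS; the notation (a generic integrand $f$, a learned sampling density $q_\theta$, and weights $w_i$) is illustrative and not taken from the talk itself:
\[
I \;=\; \int f(x)\,\mathrm{d}x
\;=\; \mathbb{E}_{x \sim q_\theta}\!\left[\frac{f(x)}{q_\theta(x)}\right]
\;\approx\; \frac{1}{N}\sum_{i=1}^{N} w_i ,
\qquad
w_i = \frac{f(x_i)}{q_\theta(x_i)}, \quad x_i \sim q_\theta .
\]
In the neural variant, $q_\theta$ is parametrized by a neural network (e.g. a normalizing flow) and trained so that it approximates the shape of $|f|$, for instance by reducing the variance of the weights $w_i$, which directly improves the sampling (unweighting) efficiency.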