
Intelligent Content Generation Mechanisms for College Basketball Training Driven by Generative AI

Jiajun Nan

Abstract


We propose an explainable ensemble framework for intelligent content generation in college basketball training, addressing the critical need for both high-quality outputs and interpretable decision-making in generative AI systems. The framework integrates heterogeneous multi-modal models with dedicated interpretability mechanisms, combining a Vision Transformer for spatial analysis of game footage and a GPT-4 architecture for textual content generation. A novel attention-based feature attribution module quantifies component contributions, while a rule-guided rationale synthesizer incorporates basketball domain knowledge to produce human-understandable explanations. Parallel computation streams for generation and explanation enable iterative refinement via user feedback. Our approach jointly optimizes generation performance and interpretability through architectural innovations in model interaction and interpretability propagation. Experiments validate the framework's ability to generate actionable training content with transparent rationales, bridging generative AI and coaching applications. This work advances explainable AI by offering a scalable solution for domains that require both creative and accountable automated content generation.
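
The abstract describes the attention-based feature attribution module only at a high level. As a minimal sketch of the underlying idea (not the paper's implementation), scaled dot-product attention can turn per-component embeddings into normalized contribution weights; all names, dimensions, and the random embeddings below are hypothetical placeholders.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attribute_components(query: np.ndarray, features: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention scores: how strongly each
    component embedding attends to the generated-output embedding."""
    d = query.shape[-1]
    scores = features @ query / np.sqrt(d)
    return softmax(scores)

# Hypothetical setup: one pooled embedding per ensemble component
# (vision branch for game footage, language branch for drill text,
# rule-guided rationale synthesizer) and one for the generated drill.
rng = np.random.default_rng(0)
components = ["vision_transformer", "language_model", "rule_synthesizer"]
features = rng.normal(size=(len(components), 64))
query = rng.normal(size=64)

for name, w in zip(components, attribute_components(query, features)):
    print(f"{name}: contribution weight {w:.3f}")
```

In a full system the embeddings would come from the vision and language branches rather than a random generator, and the resulting weights would feed the rationale synthesizer as quantified component contributions.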

Keywords


Explainable ensemble framework; Intelligent content generation; College basketball training; Multi-modal models; Interpretability mechanisms







DOI: http://dx.doi.org/10.70711/neet.v3i6.7121
