PhD Candidate, Biomedical Engineering, Brown University, Providence, Rhode Island, United States
Disclosure(s):
Nazanin Ahmadi Daryakenari: No financial relationships to disclose
Background: Recent advances in physics-informed modeling have enabled the integration of mechanistic and data-driven approaches for system identification in pharmacometrics. Pioneering methods such as Compartment Model-Informed Neural Networks (CMINNs) [3] and AI-Aristotle [4] have demonstrated how hybrid frameworks can recover hidden biological dynamics under sparse and noisy data conditions. CMINNs integrate fractional- and integer-order differential equations into PINNs to capture nonstandard pharmacokinetics, while AI-Aristotle combines symbolic regression with domain-decomposed PINNs for robust parameter estimation. Building on this foundation, cPIKANs (Chebyshev-based Physics-Informed Kolmogorov–Arnold Networks) [2] were introduced as structured alternatives to multilayer perceptrons in physics-informed frameworks. Our recent work [1] further analyzed training dynamics and optimization strategies for PINNs and tanh-cPIKANs in gray-box modeling tasks. In this study, we extend this line of research by proposing an enhanced variant, Scaled-cPIKANs, for inverse pharmacodynamic modeling.
Objectives: To solve a gray-box inverse problem in pharmacodynamics by recovering a latent chemotherapy efficacy function from tumor cell count data. We introduce Scaled-cPIKANs, a novel Physics-Informed Kolmogorov–Arnold Network architecture, and evaluate its effectiveness under sparse data conditions in comparison with traditional Physics-Informed Neural Networks (PINNs).
Methods: We modeled cancer cell dynamics under chemotherapy using a nonlinear ordinary differential equation in which the time-dependent efficacy function F_D(t) was treated as unknown. The goal was to reconstruct F_D(t) from simulated tumor burden data sampled every 5 hours over a 600-hour window. We developed Scaled-cPIKANs, a Chebyshev-based PIKAN with additional tanh nonlinearities that improve gradient flow and numerical stability. The tanh-cPIKAN and PINN models, matched for parameter count (~1,700), were trained under first-order, second-order, and hybrid optimization strategies in single and double precision. Robustness to data sparsity and convergence characteristics were assessed.
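To make the Methods concrete, the sketch below shows one way a Chebyshev-based KAN layer with a tanh squashing step, together with a physics-informed residual for the tumor ODE, could be implemented. The abstract does not specify the exact ODE, network sizes, or framework, so the logistic-growth-with-kill form, the PyTorch implementation, and all widths, degrees, and rate constants here are illustrative assumptions rather than the architecture used in the study.

```python
# Minimal sketch of a Chebyshev KAN layer with tanh input squashing and a
# physics-informed ODE residual. All sizes, constants, and the ODE form are
# illustrative assumptions, not the settings reported in the abstract.
import torch
import torch.nn as nn

class ChebyshevKANLayer(nn.Module):
    def __init__(self, in_dim, out_dim, degree=5):
        super().__init__()
        self.degree = degree
        # One learnable coefficient per (input, output, polynomial order) triple.
        self.coeffs = nn.Parameter(0.1 * torch.randn(in_dim, out_dim, degree + 1))

    def forward(self, x):
        # tanh maps inputs into (-1, 1), the natural domain of Chebyshev
        # polynomials; this is the "additional tanh nonlinearity" in the Methods.
        x = torch.tanh(x)
        # Chebyshev recurrence: T_0 = 1, T_1 = x, T_k = 2 x T_{k-1} - T_{k-2}.
        T = [torch.ones_like(x), x]
        for _ in range(2, self.degree + 1):
            T.append(2 * x * T[-1] - T[-2])
        T = torch.stack(T, dim=-1)                      # (batch, in_dim, degree+1)
        return torch.einsum("bid,iod->bo", T, self.coeffs)

class ChebyshevPIKAN(nn.Module):
    """Illustrative two-layer Chebyshev KAN mapping time t -> (N(t), F_D(t))."""
    def __init__(self, hidden=16, degree=5):
        super().__init__()
        self.net = nn.Sequential(
            ChebyshevKANLayer(1, hidden, degree),
            ChebyshevKANLayer(hidden, 2, degree),
        )

    def forward(self, t):
        return self.net(t)

def ode_residual(model, t, r=0.05, K=1e9):
    # Hypothetical logistic-growth-with-kill ODE: dN/dt = r N (1 - N/K) - F_D(t) N.
    # The abstract only states "a nonlinear ODE"; this particular form is assumed.
    # A data-mismatch loss on the sampled tumor counts N(t) would be added separately.
    t = t.requires_grad_(True)
    out = model(t)
    N, FD = out[:, 0:1], out[:, 1:2]
    dN_dt = torch.autograd.grad(N, t, grad_outputs=torch.ones_like(N),
                                create_graph=True)[0]
    return dN_dt - (r * N * (1 - N / K) - FD * N)
```

The tanh applied before the Chebyshev recurrence keeps every input inside (-1, 1), which is what stabilizes the polynomial basis and its gradients during training.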
Results: The proposed Scaled-cPIKAN architecture consistently outperformed both tanh-cPIKAN and PINNs across all training regimes. While tanh-cPIKAN achieved a mean absolute error (MAE) of 5.92×10⁻⁶ in recovering F_D(t) using a hybrid RAdam + BFGS optimizer in double precision, already outperforming the best PINN result (2.26×10⁻⁵), Scaled-cPIKAN reached comparable accuracy with a lower polynomial degree and fewer trainable parameters, highlighting its efficiency and representational power.
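The hybrid optimization schedule referenced above (a first-order RAdam phase followed by a quasi-Newton refinement) might look roughly like the following sketch. PyTorch exposes L-BFGS rather than full BFGS, so L-BFGS is used here as a stand-in, and the iteration counts, learning rate, and loss_fn callable are assumptions, not the settings reported in the Results.

```python
# Hedged sketch of a hybrid first-order / second-order training schedule.
# loss_fn(model) is assumed to return the total (data + physics) loss.
import torch

def train_hybrid(model, loss_fn, n_first_order=5000, n_second_order=500):
    model = model.double()  # double precision, as in the best-performing runs

    # Stage 1: first-order optimizer (RAdam) to reach a good basin of attraction.
    opt1 = torch.optim.RAdam(model.parameters(), lr=1e-3)
    for _ in range(n_first_order):
        opt1.zero_grad()
        loss = loss_fn(model)
        loss.backward()
        opt1.step()

    # Stage 2: quasi-Newton refinement (L-BFGS) from the warm start.
    opt2 = torch.optim.LBFGS(model.parameters(), max_iter=n_second_order,
                             line_search_fn="strong_wolfe")

    def closure():
        opt2.zero_grad()
        loss = loss_fn(model)
        loss.backward()
        return loss

    opt2.step(closure)
    return model
```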
Conclusions: This work introduces Scaled-cPIKANs as a robust and accurate framework for gray-box system identification in pharmacodynamics. Its ability to recover hidden biological functions like chemotherapy efficacy from limited tumor burden data highlights its potential for inverse PK/PD modeling in sparse and ill-posed settings. Comparative evaluations confirm its superiority over standard PINNs for these challenges.
Citations:
[1] Daryakenari NA, Shukla K, Karniadakis GE. Representation Meets Optimization: Training PINNs and PIKANs for Gray-Box Discovery in Systems Pharmacology. arXiv preprint arXiv:2504.07379. 2025 Apr 10.
[2] Toscano JD, Oommen V, Varghese AJ, Zou Z, Ahmadi Daryakenari N, Wu C, Karniadakis GE. From PINNs to PIKANs: Recent advances in physics-informed machine learning. Machine Learning for Computational Science and Engineering. 2025 Jun;1(1):1-43.
[3] Daryakenari NA, Wang S, Karniadakis GE. CMINNs: Compartment model informed neural networks—Unlocking drug dynamics. Computers in Biology and Medicine. 2025 Jan 1;184:109392.
[4] Ahmadi Daryakenari N, De Florio M, Shukla K, Karniadakis GE. AI-Aristotle: A physics-informed framework for systems biology gray-box identification. PLOS Computational Biology. 2024 Mar 12;20(3):e1011916.