City / Country: Porto, Portugal
Event dates: 16–17 July 2026
Submission deadline: 22 May 2026
Categories: Computer Science
Official website: KAN-ADIA 2026
Kolmogorov-Arnold Networks (KANs) decompose complex, multidimensional functions into structured compositions of univariate components. This decomposition lets KANs learn the univariate function on each edge during training, unlike traditional neural networks with fixed activation functions. The learnable edge functions allow KANs to adapt more effectively to complex data patterns. This adaptability increases KAN expressiveness and enables smaller KAN architectures to match or even outperform larger multi-layer perceptrons (MLPs). Since each edge function can be directly examined, KANs also improve interpretability. Although KANs show strong potential in terms of expressiveness, generalization, compactness, and interpretability, research on them is still ongoing.
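To make the idea of learnable edge functions concrete, here is a minimal sketch of a single KAN layer. It is illustrative only, not the API of any KAN library: each edge carries a univariate function expressed as a learnable linear combination of fixed basis functions, and Gaussian radial basis functions stand in for the B-splines typically used in practice.

```python
import numpy as np

class KANLayer:
    """Minimal sketch of one KAN layer (illustrative, not a real library API).

    Each edge (i, j) carries a learnable univariate function
    phi_ij(x) = sum_k c_ijk * B_k(x), where the B_k are Gaussian radial
    basis functions standing in for the B-splines used in practice.
    Output j is the sum of its incoming edge functions: y_j = sum_i phi_ij(x_i).
    """

    def __init__(self, in_dim, out_dim, n_basis=8, grid=(-1.0, 1.0), seed=0):
        rng = np.random.default_rng(seed)
        self.centers = np.linspace(grid[0], grid[1], n_basis)  # basis centers
        self.width = (grid[1] - grid[0]) / n_basis             # basis width
        # one learnable coefficient vector per edge: (out_dim, in_dim, n_basis)
        self.coef = rng.normal(0.0, 0.1, (out_dim, in_dim, n_basis))

    def _basis(self, x):
        # x: (batch, in_dim) -> basis activations (batch, in_dim, n_basis)
        d = x[..., None] - self.centers
        return np.exp(-((d / self.width) ** 2))

    def forward(self, x):
        # contract basis activations with edge coefficients: (batch, out_dim)
        return np.einsum("bik,jik->bj", self._basis(x), self.coef)

layer = KANLayer(in_dim=2, out_dim=1)
y = layer.forward(np.array([[0.3, -0.5]]))
print(y.shape)  # (1, 1)
```

Because every `phi_ij` is an explicit univariate curve, it can be plotted or inspected directly, which is the source of the interpretability advantage mentioned above; training would fit the `coef` tensor by gradient descent.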
Potential topics include, but are not limited to:
Time-efficiency: Accelerate KAN training, which currently depends on costly iterative spline computations.
Scalability: Improve how KANs scale, as training on large, high-dimensional data is resource-intensive and unstable.
Generalization: Incorporate regularization into KANs to improve accuracy across diverse learning tasks.
Explainable AI: Develop concrete applications that show how KANs can advance XAI.
Implementation and Libraries: Develop open-source tools, benchmarks, and guidelines to improve model design and evaluation.
Applications: Apply KANs and their variants in complex environments where MLPs may struggle.