Randomized Algorithms, Data Structures & Signal Processing Perspectives of AI

This chapter surveys the deep connections between classical randomized algorithms, probabilistic data structures, signal processing theory, and modern deep learning. We cover random projections and dimensionality reduction (the Johnson-Lindenstrauss lemma, random Fourier features, structured projections); randomized regularization and training techniques (dropout, stochastic depth, random augmentation, initialization theory); hashing for machine learning (locality-sensitive hashing, SLIDE, learning-to-hash); sketching and streaming algorithms (Count-Min Sketch, gradient compression, streaming PCA); Bloom filters and learned index structures; Fourier and spectral perspectives on neural networks (spectral bias, FNet, Adaptive Fourier Neural Operators); wavelet and multi-resolution analysis (scattering networks, WaveMix); the signal processing view of Transformers (attention as adaptive filtering, positional encoding as a frequency representation, and randomized attention approximations such as Performer and Reformer); randomized linear algebra (randomized SVD, Nyström approximation, CUR decomposition); and probabilistic data structures for AI systems (HyperLogLog, reservoir sampling, consistent hashing). Throughout, we emphasize both the theoretical guarantees that make these methods reliable and their practical impact on the efficiency and scalability of modern AI systems.
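As a small taste of the first topic, the following sketch illustrates a Johnson-Lindenstrauss-style Gaussian random projection (the dimensions, seed, and tolerance here are arbitrary choices for illustration, not values from the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, k = 100, 1000, 256           # n points, original dim d, target dim k
X = rng.normal(size=(n, d))        # synthetic data matrix

# Gaussian projection scaled by 1/sqrt(k), so squared distances are
# preserved in expectation; the JL lemma bounds the distortion with
# high probability for k = O(log n / eps^2).
P = rng.normal(size=(d, k)) / np.sqrt(k)
Y = X @ P

# Compare one pairwise distance before and after projection.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
distortion = proj / orig
print(f"distance distortion: {distortion:.3f}")
```

With k = 256 the relative distortion is typically only a few percent, which is the practical payoff: pairwise geometry survives even though the dimension drops from 1000 to 256.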