Modernizing Amdahl's Law: How AI Scaling Laws Shape Computer Architecture

Authors: Chien-Ping Lu

Year: 2026

Subjects: cs.DC, cs.AI, cs.AR


Abstract

Classical Amdahl's Law quantifies the limit of speedup under a fixed serial-parallel decomposition and homogeneous replication. Modern systems instead allocate constrained resources across heterogeneous hardware while the workload itself changes: some stages become effectively bounded, whereas others continue to absorb additional compute because more compute still creates value. This paper reformulates Amdahl's Law around that shift. We replace processor count with an allocation variable, replace the classical parallel fraction with a value-scalable fraction, and model specialization by a relative efficiency ratio between dedicated and programmable compute. The resulting objective yields a finite collapse threshold. For a specialized efficiency ratio R, there is a critical scalable fraction S_c = 1 - 1/R beyond which the optimal allocation to specialization becomes zero. Equivalently, for a given scalable fraction S, the minimum efficiency ratio required to justify specialization is R_c = 1/(1-S). Thus, as value-scalable workload grows, specialization faces a rising bar. The point is not that programmable hardware is always superior, but that specialization must keep re-earning its place against a moving programmable substrate. The model helps explain increasing GPU programmability, the migration of value-producing work toward learned late-stage computation, and why AI domain-specific accelerators do not simply displace the GPU.
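The collapse threshold stated in the abstract can be checked numerically. The sketch below encodes only the two closed-form expressions given there, S_c = 1 - 1/R and R_c = 1/(1 - S); the function names are illustrative, not taken from the paper, and the full allocation objective from which these thresholds are derived is not reproduced here.

```python
def critical_scalable_fraction(R: float) -> float:
    """Scalable fraction S_c beyond which the optimal allocation to
    specialized hardware becomes zero, given efficiency ratio R > 1."""
    return 1.0 - 1.0 / R

def critical_efficiency_ratio(S: float) -> float:
    """Minimum efficiency ratio R_c needed to justify specialization,
    given a value-scalable fraction 0 <= S < 1."""
    return 1.0 / (1.0 - S)

# A 4x-more-efficient accelerator stops being worth allocating to
# once 75% of the workload is value-scalable.
print(critical_scalable_fraction(4.0))   # S_c = 0.75
print(critical_efficiency_ratio(0.75))   # R_c = 4.0
```

Note that the two expressions are inverses of each other, which matches the abstract's "equivalently" phrasing: as the scalable fraction S grows toward 1, the required ratio R_c diverges, so specialization faces a rising bar.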
