A large language model from OpenAI, released March 2023.
As far as I can tell from the paper, it’s a (weak, amnesiac) AGI: its performance on basically any short-lived intellectual task matches or exceeds that of a typical well-educated teen. Astonishing.
Q. GPT-2 FLOPs?
A. $1.5 \times 10^{21}$ (source)
Q. GPT-3 FLOPs?
A. $3.1 \times 10^{23}$ (source)
Q. GPT-4 FLOPs?
A. $2.1 \times 10^{25}$ (source)
Q. A100 FLOPs?
A. $3.1 \times 10^{14}$ (312 TFLOPS; BF16/FP16 tensor core, dense — the FP32 figure is 19.5 TFLOPS)
Q. H100 FLOPs?
A. $9.9 \times 10^{14}$ (989 TFLOPS; BF16/FP16 tensor core, dense; SXM)
Q. M1 Max FLOPs?
A. $1.0 \times 10^{13}$ (10.4 TFLOPS; FP32)
Q. Nvidia RTX 4090 FLOPs?
A. $8.3 \times 10^{13}$ (82.6 TFLOPS; FP32)
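The training and hardware figures above combine into a quick back-of-envelope estimate of training cost in GPU-time. A minimal sketch, assuming peak tensor-core throughput and a hypothetical 30% model FLOPs utilization (the utilization number and the `gpu_years` helper are my own assumptions, not from any source):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # ~3.15e7

def gpu_years(train_flops, gpu_flops_per_s, utilization=0.3):
    """GPU-years to perform `train_flops`, given per-GPU peak throughput
    and an assumed utilization fraction (hypothetical 30% default)."""
    return train_flops / (gpu_flops_per_s * utilization * SECONDS_PER_YEAR)

gpt4_flops = 2.1e25   # GPT-4 training compute, from the figure above
a100 = 3.12e14        # A100 peak throughput, from the figure above

# On the order of several thousand A100-years at 30% utilization
print(f"{gpu_years(gpt4_flops, a100):,.0f} A100-years")
```

Dividing by the fleet size then gives wall-clock time: e.g. with 10,000 A100s, a few thousand A100-years works out to months of training.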