Scaling #

Some of the earliest evidence that PLM scaling behaves similarly to LLM scaling came from ESM-1b. The plot below shows model performance as a function of log10(number of parameters); as baselines, the authors also trained LSTM models at two different sizes.

ESM-1b perplexity vs. log10(# of parameters) by model type on a held-out subset of UniRef50.

RITA provided even stronger evidence that PLMs follow scaling laws. The RITA paper trained auto-regressive models of different sizes and used their performance to fit power laws describing how loss scales, then used those power laws to pick the sizes of the four models it trained.
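As a rough illustration of what deriving such a power law involves, here is a minimal sketch that fits the standard parametric form $ L(N) = (N_c / N)^{\alpha} $ to a handful of (parameter count, validation loss) pairs via linear regression in log-log space. The numbers are placeholders for illustration, not RITA's actual measurements.

```python
import numpy as np

# Hypothetical (parameter count, validation loss) pairs -- illustrative only,
# not RITA's actual measurements.
params = np.array([85e6, 300e6, 680e6, 1.2e9])
losses = np.array([2.31, 2.18, 2.10, 2.05])

# A power law L(N) = (N_c / N)**alpha is linear in log-log space:
#   log L = alpha * log N_c - alpha * log N
slope, intercept = np.polyfit(np.log(params), np.log(losses), deg=1)
alpha = -slope
N_c = np.exp(intercept / alpha)
print(f"alpha = {alpha:.3f}, N_c = {N_c:.3e}")

# Extrapolate the fitted law to a larger model size.
predicted_loss = (N_c / 3e9) ** alpha
print(f"predicted loss at 3B params: {predicted_loss:.3f}")
```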

As LLM papers often do, the ESM-1b paper used perplexity (the exponentiated cross-entropy, or ECE) to evaluate the impact of model size on performance. Much of perplexity's value in NLP comes from its strong correlation with performance on a wide range of practical tasks. We don't have nearly as strong an understanding of the relationship between PLM perplexity and downstream task performance, but preliminary evidence points to a similar relationship.
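Concretely, perplexity is just the exponential of the average per-residue cross-entropy (assuming natural logs; with base-2 logs the exponent base becomes 2). For a masked LM like ESM-1b, the probabilities come from predicting masked residues $ x_i $ given the rest of the sequence $ x_{\setminus i} $:

$$ \textrm{PPL} = \exp\!\left( -\frac{1}{N} \sum_{i=1}^{N} \log p_\theta\!\left(x_i \mid x_{\setminus i}\right) \right) $$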

ESM-1v Figure 9: performance of differently sized models on downstream tasks. We see a clear relationship between model size and downstream task performance. Combined with the ESM-1b results on the relationship between model size and ECE, this transitively points to a strong relationship between LM loss and downstream task performance.

ProtTrans furnished additional evidence for a scaling relationship, in this case between the number of data points seen during training and performance on a different set of downstream tasks. Even if we ignore the data points for the three T5 models because of the size confounder, the trend that seeing more data points improves performance on the 3-state secondary structure classification task remains strong (a minimal sketch of this kind of evaluation follows the figure below).

ProtTrans Figure 7: $ \log_{10}(\textrm{Number of samples}) $ vs. performance on NEW364 secondary structure prediction. Even confounded by different model architectures, we see a high correlation between number of samples seen during training and downstream task performance.
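For context on what these downstream evaluations typically look like, here is a minimal sketch of the common recipe: freeze the PLM, extract per-residue embeddings, and train a lightweight classifier to predict 3-state secondary structure (helix / strand / coil). The embedding arrays and labels below are random placeholders, not ProtTrans's actual pipeline; in practice they would come from running the PLM over structurally annotated proteins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Placeholder data: per-residue embeddings from a frozen PLM and 3-state
# secondary structure labels (0 = helix, 1 = strand, 2 = coil).
rng = np.random.default_rng(0)
train_embeddings = rng.normal(size=(5000, 1024))  # (num_residues, embed_dim)
train_labels = rng.integers(0, 3, size=5000)
test_embeddings = rng.normal(size=(1000, 1024))
test_labels = rng.integers(0, 3, size=1000)

# A simple linear probe on top of frozen embeddings; the PLM itself is not
# fine-tuned, so probe accuracy mostly reflects embedding quality.
probe = LogisticRegression(max_iter=1000)
probe.fit(train_embeddings, train_labels)

# Q3 accuracy: fraction of residues assigned the correct one of the three states.
q3 = accuracy_score(test_labels, probe.predict(test_embeddings))
print(f"Q3 accuracy: {q3:.3f}")
```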