We examine working memory use and incrementality using a cognitive model of grammatical encoding. Our model combines an empirically validated framework, ACT-R, with a linguistic theory, Combinatory Categorial Grammar, to target that phase of language production. Building the model with the Switchboard corpus allows it to attempt to realize a larger set of sentences. With this methodology, different strategies may be compared according to the similarity of the model's sentences to the test sentences. In this way, the model can still be evaluated by its fit to human data, without overfitting to individual experiments. The results show that while having more working memory available improves performance, using less working memory during realization is correlated with a closer fit, even after controlling for sentence complexity. Further, sentences realized with a more incremental strategy are also more similar to the corpus sentences as measured by edit distance. As high incrementality is correlated with low working memory usage, this study offers a possible mechanism by which incrementality can be explained.