One way to better understand the structure of DNA is to learn to predict the sequence. Here, we train a model to predict the missing base at any given position, given its left and right flanking contexts. Our best-performing model is a neural network that achieves an accuracy close to 54% on the human genome, 2 percentage points better than modelling the data with a Markov model. In likelihood-ratio tests, we show that the neural network is significantly better than any of the alternative models by a large margin. We report on where this accuracy is obtained, observing first that performance appears uniform across the chromosomes. The models perform best in repetitive sequences, as expected, although they remain far from random performance in the more difficult coding regions, the accuracies being roughly 70% and 40%, respectively. Exploring the sources of the accuracy further, Fourier transforming the predictions reveals weak but clear periodic signals. In the human genome, the characteristic periods hint at connections to nucleosome positioning. To investigate this, we identify similar periodic signals in the GC/AT content of the human genome, which to the best of our knowledge have not been reported before. Similarly high accuracy is found on other large genomes, while lower predictive accuracy is observed on smaller genomes. Only in mouse did we see periodic signals in the same range as in human, though weaker and of a different type. Interestingly, applying a model trained on the mouse genome to the human genome yields a performance far below that of the human model, except in the difficult coding regions. Despite the clear outcomes of the likelihood-ratio tests, the superiority of the neural network methods over the Markov model is currently limited. We expect, however, that there is great potential for modelling DNA better using different neural network architectures.
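To make the prediction task concrete, the following is a minimal sketch (not the implementation used in this work) of central-base prediction with a simple count-based, Markov-style baseline: given k bases of left context and k bases of right context, predict the missing base in the middle. The sequences, context size k, and function names are illustrative assumptions.

```python
from collections import Counter, defaultdict

BASES = "ACGT"

def make_examples(seq, k):
    """Yield (left_context, right_context, centre_base) triples from a sequence."""
    for i in range(k, len(seq) - k):
        yield seq[i - k:i], seq[i + 1:i + 1 + k], seq[i]

def train_counts(seq, k):
    """Count centre bases for every observed pair of flanking contexts."""
    counts = defaultdict(Counter)
    for left, right, centre in make_examples(seq, k):
        counts[(left, right)][centre] += 1
    return counts

def predict(counts, left, right):
    """Return the most frequent centre base for this context, with a fallback for unseen contexts."""
    ctx = counts.get((left, right))
    return ctx.most_common(1)[0][0] if ctx else "A"

if __name__ == "__main__":
    # Toy stand-ins for genomic training and test data.
    train_seq = "ACGTACGTTAGCACGTACGTAACGTT"
    test_seq = "ACGTACGGTAGC"
    k = 3
    model = train_counts(train_seq, k)
    correct = total = 0
    for left, right, centre in make_examples(test_seq, k):
        correct += predict(model, left, right) == centre
        total += 1
    print(f"accuracy: {correct / total:.2f} on {total} positions")
```

A neural network for the same task would replace the count table with a learned function from the one-hot-encoded flanking contexts to a distribution over the four bases; the evaluation loop over centre positions stays the same.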