Focusing on the multifactor evolutionary algorithm (MFEA), the most classical algorithm in multitask evolutionary computation, we theoretically analyze its inherent defects in handling multitask optimization problems whose component tasks have different dimensions, and we propose an improved variant called HD-MFEA. HD-MFEA introduces a heterodimensional selection crossover operator and an adaptive elite replacement strategy, which enable more effective gene transfer in heterodimensional multitask environments. We further construct a benchmark suite of multitask optimization problems with different dimensions, on which HD-MFEA outperforms MFEA and other improved algorithms. Second, we extend the application scope of multitask evolutionary computation: for the first time, the problem of training neural networks with different structures is formulated as a multitask optimization problem with different dimensions. Exploiting the layered structure of neural networks, we propose a heterodimensional multifactor neuro-evolution algorithm, HD-MFEA Neuro-Evolution, that trains multiple neural networks simultaneously.
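The core idea can be sketched as follows. This is an illustrative toy, not the authors' exact operators: in MFEA-style multitask optimization, all tasks share one unified search space whose dimensionality is the maximum over tasks, and each task decodes only its first `d_i` genes; a "heterodimensional" crossover, as hypothesized here, restricts gene exchange to the dimensions that both parents' tasks actually use, so the extra genes of the higher-dimensional task are not disrupted.

```python
import random

def decode(chromosome, task_dim, lower, upper):
    # Map the first task_dim unified genes (each in [0, 1]) into the
    # task's own search box [lower, upper]^task_dim.
    return [lower + g * (upper - lower) for g in chromosome[:task_dim]]

def heterodim_crossover(p1, p2, dim1, dim2, rate=0.9):
    # Uniform crossover limited to the overlap of the two task dimensions;
    # genes beyond the shorter task's dimension are inherited unchanged.
    shared = min(dim1, dim2)
    c1, c2 = p1[:], p2[:]
    for i in range(shared):
        if random.random() < rate:
            c1[i], c2[i] = p2[i], p1[i]
    return c1, c2

# Two toy tasks of different dimensionality sharing a 5-gene unified space.
task_dims = [3, 5]
unified_dim = max(task_dims)
parent_a = [random.random() for _ in range(unified_dim)]
parent_b = [random.random() for _ in range(unified_dim)]
child_a, child_b = heterodim_crossover(parent_a, parent_b, *task_dims)
```

Only the first `min(d_1, d_2)` genes are ever exchanged, so knowledge transfer happens in the subspace the tasks share while each task's private dimensions evolve undisturbed.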
Experiments on chaotic time-series data sets show that the HD-MFEA Neuro-Evolution algorithm substantially outperforms other evolutionary algorithms, and that its convergence speed and accuracy exceed those of the gradient-based methods commonly used in neural network training.
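The evaluation setting can be sketched as follows. The abstract does not name the chaotic series, so this hedged example uses the logistic map (x_{t+1} = r·x_t·(1 − x_t), r = 4) as a stand-in chaotic benchmark; the fitness is the one-step-ahead prediction error of a tiny fixed-architecture network whose flat weight vector plays the role of a chromosome, i.e. the quantity a neuro-evolutionary trainer would minimize.

```python
import math
import random

def logistic_series(n, x0=0.2, r=4.0):
    # Generate n points of the logistic map, a standard chaotic series.
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def predict(weights, x):
    # One input, two tanh hidden units, one linear output;
    # weights is the flat 7-gene chromosome (w1, b1, w2, b2, v1, v2, c).
    w1, b1, w2, b2, v1, v2, c = weights
    h1 = math.tanh(w1 * x + b1)
    h2 = math.tanh(w2 * x + b2)
    return v1 * h1 + v2 * h2 + c

def mse_fitness(weights, series):
    # Mean squared one-step-ahead prediction error; lower is better.
    errs = [(predict(weights, series[t]) - series[t + 1]) ** 2
            for t in range(len(series) - 1)]
    return sum(errs) / len(errs)

series = logistic_series(200)
chromosome = [random.uniform(-1.0, 1.0) for _ in range(7)]
fitness = mse_fitness(chromosome, series)
```

Networks with different architectures simply yield chromosomes of different lengths, which is exactly the heterodimensional multitask setting the paper addresses.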