animal images
Recently Published Documents


TOTAL DOCUMENTS: 102 (five years: 52)

H-INDEX: 8 (five years: 1)

2022 ◽  
Vol 49 (4) ◽  
pp. 120-126
Author(s):  
T. Y. Sem

This article describes the zoomorphic complex of Tungus-Manchu beliefs reflected in mythology, ritual practices, shamanism, and decorative and applied arts. Those beliefs are regarded as a coherent whole within the cultural system. The typology of the zoomorphic complex shows that the key figures were the serpent-dragon, the deer, the bear, and the tiger. In traditional worldviews and rituals, they were related to cosmogony, the ancestor cult, hunting and fishing rituals, healing, and initiatory shamanic complexes. The semantics of animal images depended on their place in the cultural system, religious ritual, and artistic communication. Comparative analysis demonstrates both ethno-cultural specificity and universal archetypal characteristics, as well as a connection with ancient regional beliefs. The Tungus-Manchu zoomorphic complex originated within East Asian traditions, influenced by cultures such as the Old Chinese, Korean, and Jurchen.


2021 ◽  
Vol 6 (9 (114)) ◽  
pp. 64-74
Author(s):  
Oleksandr Bezsonov ◽  
Oleh Lebediev ◽  
Valentyn Lebediev ◽  
Yuriy Megel ◽  
Dmytro Prochukhan ◽  
...  

A method of measuring cattle parameters using neural network methods of image processing was proposed. To this end, two neural network models were used: a convolutional artificial neural network and a multilayer perceptron. The first recognizes a cow in a photograph and identifies its breed, after which its body dimensions are determined by the stereopsis method. The perceptron estimates the cow's weight from its breed and size information. Mask R-CNN (Mask Region-based Convolutional Neural Network) was chosen as the convolutional network. To refine information on the physical parameters of the animals, a 3D camera (Intel RealSense D435i) was used. Images of cows taken from different angles were processed with the photogrammetric method: body dimensions were determined by analyzing animal images taken with synchronized cameras from different viewpoints. First, a cow was identified in the photograph and its breed was determined using the Mask R-CNN network. Next, the animal's dimensions were determined by the stereopsis method. The resulting breed and size data were fed to a predictive model to estimate the animal's weight. In the modeling, the Ayrshire, Holstein, Jersey, and Krasnaya Stepnaya breeds were considered as the cow breeds to be recognized. The use of a pre-trained network, subsequently fine-tuned with the SGD algorithm on an Nvidia GeForce 2080 video card, made it possible to significantly speed up the learning process compared to training on a CPU. The results obtained confirm the effectiveness of the proposed method in solving practical problems.
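The stereopsis step above can be sketched in a few lines. This is a minimal toy illustration of depth-from-disparity triangulation, not the article's implementation; all camera parameters, pixel coordinates, and landmark choices here are invented for the example.

```python
# Toy sketch of stereopsis: two synchronized, parallel cameras see the same
# landmark at slightly different pixel columns (the disparity), which gives
# depth; two landmarks back-projected to 3D give a body dimension.
# All numbers below are illustrative assumptions, not measured values.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (metres) of a point seen by two parallel cameras."""
    return focal_px * baseline_m / disparity_px

def point_3d(u, v, cx, cy, focal_px, depth):
    """Back-project pixel (u, v) to camera coordinates at a known depth."""
    x = (u - cx) * depth / focal_px
    y = (v - cy) * depth / focal_px
    return (x, y, depth)

f, b = 900.0, 0.12        # focal length in pixels, camera baseline in metres
cx, cy = 640.0, 360.0     # principal point of the left camera

# Two hypothetical landmarks on the animal, matched in both images.
depth = depth_from_disparity(f, b, 54.0)     # 54 px disparity -> 2.0 m away
p1 = point_3d(500, 300, cx, cy, f, depth)
p2 = point_3d(820, 310, cx, cy, f, depth)

# Euclidean distance between the landmarks = one body dimension in metres.
body_length = sum((pa - pb) ** 2 for pa, pb in zip(p1, p2)) ** 0.5
print(round(body_length, 3))
```

A production pipeline would take the landmark pixels from the segmentation mask rather than hard-coding them, and would use calibrated (rectified) camera intrinsics.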


2021 ◽  
Vol 2132 (1) ◽  
pp. 012001
Author(s):  
Peiyi Zeng

Abstract Animal image classification with CNNs (convolutional neural networks) is widely investigated in the area of image recognition and classification, but most studies focus on classifying species with obvious visual distinctions; for example, CNNs are commonly employed to distinguish images of dogs from cats. This article instead addresses the classification of visually similar animals by applying a simple 2D CNN implemented in Python, focusing on binary classification of snub-nosed monkeys versus common monkeys, a distinction that is hard to make manually in a short time. To construct the convolutional neural network, some preparations were made in advance, such as building and preprocessing the dataset. The dataset was collected with a Python crawler (downloading from Google Images), with 800 training images and 200 test images per class. Preprocessing included image decoding, resizing, and standardization. The model was then trained and tested to verify its reliability. The training accuracy reached 96.67% without any abnormality. The test accuracy nearly coincides with the training accuracy at each interval of 50 epochs, as plotted in a graph, indicating similar trends and results throughout the process. The CNN model in this study can therefore help people identify rare animals in time so that they can be protected effectively; CNNs will thus be helpful in the field of animal conservation, especially for rare species.
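The forward path of such a binary classifier can be illustrated with a minimal sketch: standardize the input, convolve, apply ReLU, pool, and squash to a probability with a sigmoid. This is a toy in plain Python on a 4x4 "image", not the article's network; the kernel and image values are invented for the example.

```python
# Minimal sketch of one forward pass of a 2D CNN for binary classification:
# standardization (as in the preprocessing step), one convolution with ReLU,
# global average pooling, and a sigmoid output. Toy data, toy kernel.
import math

def conv2d_relu(image, kernel):
    """Valid 2D cross-correlation followed by ReLU."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    return [[max(sum(image[i + di][j + dj] * kernel[di][dj]
                     for di in range(kh) for dj in range(kw)), 0.0)
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]

def global_avg(feature_map):
    vals = [v for row in feature_map for v in row]
    return sum(vals) / len(vals)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Standardize a toy 4x4 "image" to zero mean / unit variance.
img = [[0, 1, 2, 3], [1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
flat = [v for row in img for v in row]
mean = sum(flat) / len(flat)
std = (sum((v - mean) ** 2 for v in flat) / len(flat)) ** 0.5
img = [[(v - mean) / std for v in row] for row in img]

kernel = [[0.5, -0.5], [-0.5, 0.5]]   # toy edge-like 2x2 filter
p = sigmoid(global_avg(conv2d_relu(img, kernel)))   # P(class 1)
print(0.0 < p < 1.0)
```

In the study's setting the same structure would be stacked into several convolution and pooling layers, and the kernels learned by backpropagation rather than fixed by hand.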


2021 ◽  
pp. 1-22
Author(s):  
Dennis De Vriese ◽  
Malaika Brengman ◽  
Frédéric Leroy ◽  
Wouter Ryckbosch

2021 ◽  
Vol 12 ◽  
Author(s):  
Zewen Wang ◽  
Jiayi Li ◽  
Jieting Wu ◽  
Hui Xu

Few studies have combined visual communication courses with image style transfer. Nevertheless, such a combination can help students perceive the differences between image styles more vividly. Therefore, a collaborative application combining visual communication courses and image style transfer is reported here. First, the visual communication courses are analyzed to establish their relationship with image style transfer. Then, a style transfer method based on deep learning is designed, and a fast transfer network is introduced. Image rendering is accelerated by separating training from execution. A fast style conversion network is built on TensorFlow, and a style model is obtained after training. Finally, six types of images are selected from the Google Gallery for style conversion: landscape, architectural, character, animal, cartoon, and hand-painted images. The style transfer method achieves excellent results over the whole image, except in regions that are hard to render. Increasing the number of iterations of the style transfer network reduces the loss of image content and style. The method can transfer image style in under 1 s, enabling real-time style transfer, and it effectively improves the stylization effect and image quality during conversion. The proposed system can deepen students' understanding of different artistic styles in visual communication courses, thereby improving their learning efficiency.
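At the core of most neural style transfer methods, including fast transfer networks of the kind described above, is a style loss computed from Gram matrices of CNN feature maps. The following is a minimal sketch of that loss with toy feature lists standing in for real CNN activations; it is an illustration of the standard technique, not this article's code.

```python
# Sketch of the Gram-matrix style loss used in neural style transfer.
# "Feature maps" here are toy flattened lists; a real system would extract
# them from intermediate layers of a pretrained CNN.

def gram(features):
    """Gram matrix of C feature maps, each flattened to length N.

    Entry (i, j) is the normalized inner product of channels i and j,
    capturing which features co-occur -- the 'style' statistics.
    """
    n = len(features[0])
    return [[sum(fi[k] * fj[k] for k in range(n)) / n for fj in features]
            for fi in features]

def style_loss(feats_a, feats_b):
    """Mean squared difference between the two Gram matrices."""
    ga, gb = gram(feats_a), gram(feats_b)
    c = len(ga)
    return sum((ga[i][j] - gb[i][j]) ** 2
               for i in range(c) for j in range(c)) / (c * c)

# Two toy 2-channel "feature maps" of spatial size 3.
content_feats = [[1.0, 2.0, 3.0], [0.0, 1.0, 0.0]]
style_feats   = [[1.0, 2.0, 3.0], [0.0, 1.0, 0.0]]
print(style_loss(content_feats, style_feats))   # identical features -> 0.0
```

A fast transfer network is trained once to minimize this style loss (plus a content loss) over many images, so that at execution time stylization is a single forward pass — which is what makes sub-second transfer possible.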


2021 ◽  
Vol 21 ◽  
pp. 125-149
Author(s):  
YUCHEN ZHANG
Keyword(s):  
Mo Yan ◽  

2021 ◽  
Vol 5 (4) ◽  
pp. p14
Author(s):  
Yanhong Zeng

As an important part of children's literature, nursery rhymes are the earliest literary form that children encounter after birth; they reflect objective things, living customs, and national culture. Through a comparison of animal images in classic Chinese and English nursery rhymes, this paper concludes that animal images in nursery rhymes carry cultural differences: some animal images have similar cultural connotations in Chinese and English culture, while others have different connotations.


2021 ◽  
Author(s):  
Alex Clarke ◽  
Jordan E Crivelli-Decker ◽  
Charan Ranganath

When making a turn at a familiar intersection, we know what items and landmarks will come into view. These perceptual expectations, or predictions, come from our knowledge of the context; however, it is unclear how memory and perceptual systems interact to support the prediction and reactivation of sensory details in cortex. To address this, human participants learned the spatial layout of animals positioned in a cross maze. During fMRI, participants navigated between animals to reach a target, and in the process saw a predictable sequence of five animal images. Critically, to isolate activity patterns related to item predictions, rather than bottom-up inputs, one quarter of trials ended early, with a blank screen presented instead. Using multivariate pattern similarity analysis, we reveal that activity patterns in early visual cortex, posterior medial regions, and the posterior hippocampus showed greater similarity when seeing the same item compared to different items. Further, item effects in the posterior hippocampus were specific to the sequence context. Critically, activity patterns associated with seeing an item in visual cortex and posterior medial cortex were also related to activity patterns when an item was expected but omitted, suggesting sequence predictions were reinstated in these regions. Finally, multivariate connectivity showed that patterns in the posterior hippocampus at one position in the sequence were related to patterns in early visual cortex and posterior medial cortex at a later position. Together, our results support the idea that hippocampal representations facilitate sensory processing by modulating visual cortical activity in anticipation of expected items.
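The core measure in multivariate pattern similarity analysis is the correlation between voxel activity patterns across trials. The sketch below shows that computation on invented toy "voxel patterns" (the real study's data and preprocessing are far richer): patterns from two presentations of the same item should correlate more strongly than patterns from different items.

```python
# Toy illustration of pattern similarity analysis: Pearson correlation
# between voxel activity patterns. All pattern values are invented.
import math

def pearson(x, y):
    """Pearson correlation between two equal-length activity patterns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 4-voxel patterns: two presentations of the same item,
# and one presentation of a different item.
same_1 = [0.9, 0.1, 0.5, 0.7]
same_2 = [1.0, 0.2, 0.4, 0.8]
diff   = [0.1, 0.9, 0.8, 0.2]

r_same = pearson(same_1, same_2)
r_diff = pearson(same_1, diff)
print(r_same > r_diff)   # same-item similarity exceeds different-item
```

The "omitted item" analysis in the study uses the same comparison, except one of the two patterns comes from a trial where the expected image never appeared, so any above-baseline similarity must reflect an internally generated prediction.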


Author(s):  
Deepthi K

Animal watching is a common hobby, but identifying species requires the assistance of reference books. To give animal watchers a handy tool for admiring the beauty of animals, we developed a deep learning platform that assists users in recognizing endemic animal species through an app named the ImageNet of Animals (IoA). Animal images were learned by a convolutional neural network (CNN) to localize prominent features in the images. First, we established and generated a bounded region of interest around the shapes and colors of the object granularities, and balanced the distribution of animal species. Then, a skip connection method was used to linearly combine the outputs of the previous and current layers to improve feature extraction. Finally, we applied the SoftMax function to obtain a probability distribution over animal features. The learned parameters were then used to identify pictures uploaded by mobile users. The proposed CNN model with skip connections achieved a higher accuracy of 99.00% on the training images, compared with 93.98% for a plain CNN and 89.00% for an SVM. On the test dataset, the average sensitivity, specificity, and accuracy were 93.79%, 96.11%, and 95.37%, respectively.
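The two building blocks named in the abstract — a skip connection that linearly combines a layer's input with its output, and a SoftMax over class scores — can be sketched as follows. This is a generic toy illustration in plain Python, not the IoA model; the layer sizes, weights, and inputs are invented.

```python
# Sketch of (1) a skip connection: the block's input is added to its
# transformed output (identity shortcut), and (2) SoftMax turning class
# scores into a probability distribution. All values are illustrative.
import math

def dense_relu(x, w, b):
    """One fully connected layer with ReLU; w[j] holds unit j's weights."""
    return [max(sum(xi * wij for xi, wij in zip(x, w[j])) + b[j], 0.0)
            for j in range(len(w))]

def skip_block(x, w, b):
    """Linearly combine the layer input with its output (skip connection)."""
    y = dense_relu(x, w, b)
    return [xi + yi for xi, yi in zip(x, y)]

def softmax(scores):
    """Numerically stable SoftMax: probabilities summing to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

x = [0.5, -0.2, 0.1]                                     # toy 3-d features
w = [[0.1, 0.2, 0.0], [0.0, 0.1, 0.3], [0.2, 0.0, 0.1]]  # 3 output units
b = [0.0, 0.1, -0.1]

probs = softmax(skip_block(x, w, b))   # per-class probabilities
print(abs(sum(probs) - 1.0) < 1e-9)
```

Because the shortcut passes the input through unchanged, gradients flow directly across the block during training, which is the usual rationale for skip connections improving feature extraction in deeper networks.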

