Graphical evolving transformation system machine
For years, scientists have grappled with the problem of machine intelligence. Learning classes of objects and then classifying new objects into those classes is a common task in machine intelligence. Two object-representation schemes are often used for this task: a vector-based representation and a graph-based representation. While the vector representation has a sound mathematical foundation and well-developed optimization tools, it cannot encode relations between patterns and their parts, and thus falls short of the complexity of human perception. The graph-based representation, on the other hand, naturally captures intrinsic structural properties, but the available algorithms usually have exponential complexity. In this work, we build an inductive learning algorithm that relies on a graph-based representation of objects and their classes, and we test the framework on a competitive dataset of human actions in static images. The method incorporates three primary measures of class representation quality: likelihood probability, family-resemblance typicality, and minimum description length. Empirical benchmarking shows that the method is robust to noisy input, scales well to real-world datasets, and achieves performance comparable to current learning techniques. Moreover, our method has the advantage of an intuitive representation of both individual patterns and their classes. Although we apply it to the specific problem of human pose recognition, our framework, named graphical Evolving Transformation System (gETS), can have a wide range of applications and can be used in other machine learning tasks.
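As a rough illustration of how the three measures named above might be combined to score a candidate class representation, the sketch below uses a simple additive scoring rule. The class `CandidateRepresentation`, the field names, the weights, and the additive form are all illustrative assumptions for this sketch, not the scoring rule actually used by gETS.

```python
from dataclasses import dataclass

@dataclass
class CandidateRepresentation:
    """A hypothetical candidate class representation (e.g., a set of graph transformations)."""
    log_likelihood: float      # log P(training graphs | representation)
    typicality: float          # family-resemblance typicality of class members, in [0, 1]
    description_length: float  # encoding cost of the representation, in bits

def score(c: CandidateRepresentation,
          w_lik: float = 1.0, w_typ: float = 1.0, w_mdl: float = 1.0) -> float:
    """Higher is better: reward likelihood and typicality, penalize description length.

    The additive combination and unit weights are assumptions made for illustration.
    """
    return w_lik * c.log_likelihood + w_typ * c.typicality - w_mdl * c.description_length

def best_representation(candidates):
    """Select the candidate class representation with the highest combined score."""
    return max(candidates, key=score)

if __name__ == "__main__":
    candidates = [
        CandidateRepresentation(log_likelihood=-12.4, typicality=0.81, description_length=35.0),
        CandidateRepresentation(log_likelihood=-10.9, typicality=0.74, description_length=48.0),
    ]
    print(best_representation(candidates))
```

The point of the sketch is only that the three criteria pull in different directions (fit to the data versus simplicity of the class description), so some trade-off between them must be made when selecting a class representation.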