Although machine learning originated as a subfield of Artificial Intelligence, its effectiveness in carrying out inductive inference is now widely recognized, and it has consequently attracted much attention across fields related to informatics.
Our laboratory currently covers the following research areas:
Machine Learning from Discrete Data
We investigate the theoretical foundations and practical realization of methods with which computers can learn or discover rules from collections of discrete data, e.g. relational data, (semi-)structured data, and text data. On the theoretical side, we clarify the nature of such discrete data in order to apply them to machine learning and knowledge discovery. We also investigate methods for discretizing continuous data so that it can be used in inductive inference. Moreover, we analyze the computational complexity of these methods and implement them as systems for use in realistic settings.
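As a toy illustration of the discretization step mentioned above (a minimal sketch of one common technique, equal-width binning, not the laboratory's actual systems), continuous attribute values can be mapped to a small set of symbolic bins before being handed to an inductive learner:

```python
# Toy sketch: equal-width binning, one common way to discretize a
# continuous attribute into symbols usable by an inductive learner.
# Illustrative only; function and variable names are our own.

def equal_width_bins(values, n_bins):
    """Map each continuous value to a bin index in [0, n_bins)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0  # guard against all-equal input
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

temps = [12.1, 15.0, 18.3, 21.7, 25.4, 29.9]
print(equal_width_bins(temps, 3))  # → [0, 0, 1, 1, 2, 2]
```

The resulting bin indices can then serve as discrete attribute values in rule learning or knowledge discovery.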
Interplay Between Statistical Learning Approaches and Modern Optimization Techniques
Our laboratory carries out research at the interface between relevant probabilistic models, their ability to describe complex phenomena (time series, graphs, strings), and their practical deployment at large scale using modern optimization techniques (stochastic gradient methods, semidefinite programming) on parallel platforms (GPGPU parallelization).
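To make the stochastic gradient technique named above concrete, here is a minimal self-contained sketch (our own illustration on synthetic data, not the laboratory's deployed large-scale code) fitting a line y ≈ w·x + b by one-sample gradient updates:

```python
import random

# Minimal sketch of stochastic gradient descent on a least-squares
# objective, using noiseless synthetic data y = 2x + 1.
# Illustrative only; in practice such updates run at scale on GPGPUs.

random.seed(0)
data = [(x, 2.0 * x + 1.0) for x in [i / 100 for i in range(100)]]

w, b, lr = 0.0, 0.0, 0.05
for epoch in range(200):
    random.shuffle(data)
    for x, y in data:          # one-sample (stochastic) updates
        err = (w * x + b) - y
        w -= lr * err * x      # gradient of 0.5*err^2 w.r.t. w
        b -= lr * err          # gradient of 0.5*err^2 w.r.t. b
print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

The same update rule scales to massive data sets because each step touches only one sample, which is what makes parallel and large-scale deployment attractive.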
Grammatical Inference
Based on the observation that the basis theorem for polynomial ring ideals can provide a new approach to formulating machine learning tasks, we investigate the relationships between machine learning and algebraic structures.
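As a toy illustration of the algebraic viewpoint (our own simplified example, not the laboratory's actual method, which concerns general polynomial ring ideals): in the univariate ring Q[x] every ideal is principal, so deciding whether a polynomial belongs to the ideal generated by g reduces to checking that the remainder of division by g is zero:

```python
from fractions import Fraction

# Toy illustration: ideal membership in the univariate ring Q[x].
# Every ideal there is principal, so "f is in the ideal generated by g"
# means "f mod g == 0". Polynomials are coefficient lists, constant
# term first. Illustrative only; names are our own.

def poly_mod(f, g):
    """Remainder of f divided by g over the rationals."""
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    while len(f) >= len(g) and any(f):
        if f[-1] == 0:
            f.pop()            # drop a zero leading coefficient
            continue
        ratio = f[-1] / g[-1]  # cancel the leading term of f
        shift = len(f) - len(g)
        for i, c in enumerate(g):
            f[i + shift] -= ratio * c
        f.pop()
    return f

def in_ideal(f, g):
    """Does f lie in the ideal generated by g?"""
    return not any(poly_mod(f, g))

# x^2 - 1 lies in the ideal generated by x - 1; x^2 + 1 does not.
print(in_ideal([-1, 0, 1], [-1, 1]))  # → True
print(in_ideal([1, 0, 1], [-1, 1]))   # → False
```

In the multivariate setting the analogous decision requires a Gröbner-style basis rather than a single generator, which is where the interaction with learning over algebraic structures becomes nontrivial.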