Spring 2012 - Abstracts and Detailed Information

 Ichiro Takeuchi (Nagoya Institute of Technology), Parametric Optimization in Machine Learning

 Oliver Friedmann (Institut für Informatik), Exponential Lower Bounds for Solving Infinitary Payoff Games and Linear Programs
 Mikkel Thorup (University of Copenhagen), Combinatorial coloring of 3-colorable graphs

 Bruce Shepherd (McGill University), On combinatorial optimization problems with side constraints
 May 7th: The lecture is postponed until May 14th.

 Andris Ambainis (University of Latvia), Variable time amplitude amplification and quantum algorithms for linear algebra problems
 Shengyu Zhang (Chinese University of Hong Kong), On the Complexity of Trial and Error

 John Watrous (Univ. of Waterloo), Quantum computing, interactive proofs, and QIP = PSPACE

 Peter Surový (the Institute of Statistical Mathematics), Examples of Applied Informatics in Forest Management and Modeling

 Yoshinobu Kawahara (Osaka University), Learning with discrete convexity

 Matthew DeBrecht (CiNet, NICT), Machine learning applications to Brain Machine Interfaces

 Francois Le Gall (University of Tokyo), Testing Efficiently Algebraic Properties

 Shang-Hong Lai (National Tsing Hua University), Statistical Modeling of 3D Facial Expression Deformation and Its Applications

 Yi-Shin Chen (National Tsing Hua University), HOMME: Hierarchical-Ontological Mind Map Explorer

 Francois Le Gall (University of Tokyo), Faster Algorithms for Rectangular Matrix Multiplication

 Kevin Duh (NAIST), Learning with multiple objectives
 July 23rd: The lecture is not held because the lecture room is not available.
April 9th
16:30 - 18:00, Room 213, Faculty of Engineering Integrated Research Bldg, Main Campus
Ichiro Takeuchi, Nagoya Institute of Technology
Title: Parametric Optimization in Machine Learning
Abstract: Many machine learning algorithms are formulated as
optimization problems. These optimization problems are often
parametrized by one or more problem parameters (or hyperparameters),
such as the regularization parameter. In practice, many learning models must
be trained by solving a number of optimization problems with different
problem parameters (e.g., model selection). The methods for studying the
relation between problem parameters and optimal solutions are called
parametric programming or sensitivity analysis. In some class of
optimization problems, parametric programming allows us to compute the
path of the optimal solutions for different values of problem
parameters. In this talk, I discuss how parametric programming can be
used in machine learning, and present some recent advances in the
literature.
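A minimal sketch of the solution-path idea (illustrative only, not taken from the talk): for ridge regression the optimizer has a closed form for every value of the regularization parameter, so the path of optimal solutions over a grid of parameter values can be traced directly.

```python
import numpy as np

# Toy data: 5 samples, 2 features (made-up numbers for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
y = X @ np.array([1.0, -2.0]) + 0.1 * rng.normal(size=5)

def ridge_solution(lam):
    """Closed-form minimizer of ||y - Xw||^2 + lam * ||w||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Trace the solution path over a grid of regularization parameters,
# as one would for model selection.
path = {lam: ridge_solution(lam) for lam in [0.01, 0.1, 1.0, 10.0]}
for lam, w in path.items():
    print(lam, np.round(w, 3))
```

Larger regularization shrinks the solution toward zero, which is the kind of parameter-to-solution relation that parametric programming studies.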
April 16th
16:30 - 18:30, Room N1, Faculty of Engineering Building No.3
Oliver Friedmann, Institut für Informatik, LMU München
Title: Exponential Lower Bounds for Solving Infinitary Payoff Games and
Linear Programs
Abstract: Policy iteration is one of the most important algorithmic schemes
for solving infinitary payoff games such as parity games, stochastic games
and Markov decision processes, and many more. It is parameterized by an
improvement rule that determines how to proceed in the iteration from one
policy to the next. It is a major open problem whether there is an
improvement rule that results in a polynomial time algorithm for solving one
of the considered (two-player) game classes.
Simplex algorithms for solving linear programs are closely related to policy iteration algorithms. Like policy iteration, the simplex algorithm is parameterized by a so-called pivoting rule that describes how to proceed from one basic feasible solution in the linear program to the next. Also, it is a major open problem whether there is a pivoting rule that results in a (strongly) polynomial time algorithm for solving linear programs.
We describe our recent constructions for parity games that give rise to
exponential lower bounds for several improvement rules, and how to extend
the lower bound to more expressive game classes like stochastic games. We
show that our construction for parity games can be translated to Markov
decision processes, transferring our lower bound to their domain, and
finally show how the lower bounds for the MDPs can be transferred to the
linear programming domain, solving problems that have been open for several
decades.
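For readers unfamiliar with the scheme, policy iteration on a toy Markov decision process can be sketched as follows (the transition probabilities and rewards are made up for illustration; the improvement rule here is plain greedy one-step lookahead):

```python
import numpy as np

# A tiny 2-state, 2-action MDP (hypothetical numbers, for illustration only).
# P[a][s, s'] = transition probability under action a, R[a][s] = expected reward.
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # action 0
     np.array([[0.1, 0.9], [0.7, 0.3]])]   # action 1
R = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
gamma = 0.9

def policy_iteration(policy):
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = np.array([P[policy[s]][s] for s in range(2)])
        r_pi = np.array([R[policy[s]][s] for s in range(2)])
        v = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
        # Policy improvement: the "improvement rule" (here: greedy lookahead).
        q = np.array([[R[a][s] + gamma * P[a][s] @ v for a in range(2)]
                      for s in range(2)])
        new_policy = q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, v   # no improving switch exists: optimal
        policy = new_policy

policy, v = policy_iteration(np.array([0, 0]))
```

The lower-bound constructions of the talk are, roughly, game families on which such improvement rules are forced through exponentially many policies before terminating.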
Mikkel Thorup, University of Copenhagen
Title: Combinatorial coloring of 3-colorable graphs
Abstract: We consider the problem of coloring a 3-colorable graph in polynomial
time using as few colors as possible. We present a combinatorial
algorithm getting down to $\tilde O(n^{4/11})$ colors ($\tilde O$
suppresses log-factors). This is the first combinatorial improvement of
Blum's $\tilde O(n^{3/8})$ bound from FOCS'90. Like Blum's algorithm,
our new algorithm composes nicely with recent semidefinite
approaches. The current best bound is $O(n^{0.2072})$ colors by
Chlamtac from FOCS'07. We now bring it down to $O(n^{0.2038})$
colors. Joint work with Ken-Ichi Kawarabayashi.
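As classical background (not the algorithm of the talk), Wigderson's combinatorial idea already achieves O(sqrt(n)) colors on a 3-colorable graph: while some vertex has degree at least about sqrt(n), its neighborhood must be bipartite, so 2-color that neighborhood with two fresh colors and remove it; color the remaining low-degree graph greedily. A sketch:

```python
from collections import deque
from math import isqrt

def wigderson_color(adj):
    """Color a 3-colorable graph with O(sqrt(n)) colors (Wigderson's idea).
    adj: dict mapping each vertex to its set of neighbors."""
    n = len(adj)
    color = {}
    next_color = 0
    threshold = isqrt(n) + 1
    active = set(adj)

    def degree(v):
        return sum(1 for u in adj[v] if u in active)

    while True:
        high = next((v for v in active if degree(v) >= threshold), None)
        if high is None:
            break
        # In a 3-colorable graph the neighborhood of any vertex is bipartite:
        # 2-color it by BFS with two fresh colors, then remove it.
        nbrs = {u for u in adj[high] if u in active}
        for start in nbrs:
            if start in color:
                continue
            color[start] = next_color
            queue = deque([start])
            while queue:
                v = queue.popleft()
                for u in adj[v]:
                    if u in nbrs and u not in color:
                        # give u the other of the two fresh colors
                        color[u] = next_color + (1 - (color[v] - next_color))
                        queue.append(u)
        active -= nbrs
        next_color += 2
    # Greedy-color the remaining low-degree vertices.
    for v in active:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Example: the complete tripartite graph K_{3,3,3} is 3-colorable.
parts = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}]
adj = {v: set().union(*(p for p in parts if v not in p)) for v in range(9)}
coloring = wigderson_color(adj)
```

Blum's and the new algorithm refine this high-degree/low-degree split with much more delicate combinatorial progress steps.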
April 23rd
16:30 - 18:00, Room N1, Faculty of Engineering Building No.3
Bruce Shepherd, McGill University
Title: On combinatorial optimization problems with side constraints
Abstract: We describe several network flow and network design problems where recent changes in technology have required strengthening the notion of “feasibility”. In each case, we try to assess the impact or “cost” of adding the new constraint. Some models we consider include unsplittable and confluent flow, as well as robust network design.
May 14th
16:30 - 18:30, Room N1, Faculty of Engineering Building No.3
Andris Ambainis, University of Latvia
Title: Variable time amplitude amplification and quantum algorithms for linear algebra problems
Abstract: Quantum amplitude amplification is a method of increasing the success probability of an algorithm from a small \epsilon>0 to \Theta(1) with fewer repetitions than are needed classically. We generalize quantum amplitude amplification to the case when different branches of the algorithm being amplified stop at different times.
We then apply the new variable-time amplitude amplification to give two new quantum algorithms for linear algebra problems. Our first algorithm is a new quantum algorithm for solving systems of linear equations which is almost quadratically faster than the previously known algorithm by Harrow et al. Our second algorithm tests whether a matrix A is singular or far from singular, faster than the previously known quantum algorithms.
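For context, the standard fixed-time amplitude amplification being generalized here can be summarized as follows (textbook material, not the new result of the talk):

```latex
% If algorithm $\mathcal{A}$ succeeds with probability $\epsilon$, put
% $\sin\theta = \sqrt{\epsilon}$. The amplification operator (up to phase
% conventions) rotates the state by angle $2\theta$ in a 2-dimensional subspace:
\[
  Q = \mathcal{A}\, S_0\, \mathcal{A}^{-1} S_f
  \qquad\Longrightarrow\qquad
  \text{success amplitude after } k \text{ iterations} = \sin\bigl((2k+1)\theta\bigr),
\]
\[
  \text{so } k = \Theta\!\left(1/\sqrt{\epsilon}\right) \text{ iterations reach success probability } \Theta(1),
  \text{ versus } \Theta\!\left(1/\epsilon\right) \text{ classical repetitions.}
\]
```

The variable-time generalization of the talk handles the case where the runs of \mathcal{A} being amplified do not all take the same time.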
Shengyu Zhang, Chinese University of Hong Kong
Title: On the Complexity of Trial and Error
Abstract: Motivated by certain applications from physics, biochemistry, economics, and computer science in which the objects under investigation are unknown or not directly accessible because of various limitations, we propose a trial-and-error model to examine search problems with unknown inputs. Given a search problem with a hidden input, we are asked to find a valid solution. The way to find such a solution is to propose candidate solutions, i.e., trials, and, using observed violations, i.e., errors, to prepare future proposals. In accordance with our motivating applications, we consider a fairly broad class of constraint satisfaction problems, and assume that errors are signaled by a verification oracle in the format of the index of a violated constraint (with the exact content of the constraint still hidden). The objective is to design time- and/or trial-efficient algorithms that will find a valid solution or, alternatively, to show that the problem is intrinsically hard.
Our discoveries can be summarized as follows. On one hand, despite the seemingly very little information provided by the verification oracle, efficient algorithms do exist for a number of important problems. For the Nash, Core, Stable Matching, and SAT problems, the unknown-input versions are as hard as the corresponding known-input versions, up to a polynomial factor. We further conduct a closer study of the latter two problems and give almost tight bounds on their trial complexities. On the other hand, there are problems whose complexities are substantially increased in the unknown-input model. For Graph Isomorphism and Group Isomorphism, in particular, although there are trial-efficient algorithms, no time-efficient algorithms exist (under standard hardness assumptions).
Our model investigates the value of information, and our results demonstrate that the lack of input information can introduce various levels of extra difficulty. The model accommodates a wide range of combinatorial and algebraic structures such as order theory, group theory and strong ellipsoid method, and exhibits intimate connections with (and we hope can also serve as a useful supplement to) certain existing learning and complexity theories.
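The oracle interface of the model can be illustrated with a toy sketch (the brute-force solver below only demonstrates the interface, not the paper's algorithms): the solver submits trial assignments for a hidden SAT instance and observes nothing but the index of a violated clause.

```python
import itertools

def make_oracle(clauses):
    """Verification oracle for a hidden CNF instance: given a trial assignment,
    return the index of some violated clause, or None if the trial is valid.
    Only the index leaks; the clause content stays hidden from the solver."""
    def oracle(assignment):
        for i, clause in enumerate(clauses):
            if not any(assignment[abs(l)] == (l > 0) for l in clause):
                return i
        return None
    return oracle

# Hidden instance (unknown to the solver): (x1 or x2) and (not x1 or x3) and (not x3).
hidden = [[1, 2], [-1, 3], [-3]]
oracle = make_oracle(hidden)

def solve(oracle, n_vars):
    """Naive trial-and-error solver: propose assignments until no violation."""
    trials = 0
    for bits in itertools.product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        trials += 1
        if oracle(assignment) is None:
            return assignment, trials
    return None, trials

assignment, trials = solve(oracle, 3)
```

The interesting question of the paper is how much better than such brute force one can do, in both trials and time, given only these violation indices.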
May 21st
16:30 - 18:00, Room N1, Faculty of Engineering Building No.3
John Watrous, Institute for Quantum Computing and School of Computer Science, University of Waterloo
Title: Quantum computing, interactive proofs, and QIP = PSPACE
Abstract:
The interactive proof system model of computation is a cornerstone of complexity theory, and its quantum computational variant has been studied in quantum complexity theory for over a decade. In this talk I will discuss an exact characterization of the power of quantum interactive proof systems that I proved a few years ago in collaboration with Rahul Jain, Zhengfeng Ji, and Sarvagya Upadhyay. The characterization states that the collection of computational problems having quantum interactive proof systems consists precisely of those problems solvable with an ordinary classical computer using a polynomial amount of memory (or QIP = PSPACE in complexitytheoretic terminology). This characterization implies the striking fact that quantum computing does not provide any increase in computational power over classical computing in the context of interactive proof systems.
I will not assume that the audience for this talk necessarily has familiarity with either quantum computing or complexity theory. With the spirit of the interactive proof system model in mind, any questions relating to the talk, or to complexity theory or quantum computation more generally, will be most welcome.
May 28th
16:30 - 18:00, Room N1, Faculty of Engineering Building No.3
Peter Surový, The Institute of Statistical Mathematics
Title: Examples of Applied Informatics in Forest Management and Modeling
Abstract:
The complexity of growth and production models in forestry makes the use of computational power, and thus knowledge from computer science research, indispensable. In this presentation I would like to show the possibilities for linking advances in computer science with mathematical modeling and, subsequently, with forest management and decision-making processes.
In the second part of the presentation, a method for the acquisition of three-dimensional data with a magnetic digitizer will be explained, along with options for structural model development with respect to forest management. Some basic examples of three-dimensional modeling in the Functional-Structural Plant Model environment (GroIMP) will be shown.
The third part will be dedicated to image analysis and its practical use in forestry. Possibilities for measuring productive and non-productive forest functions (such as aesthetic and visual perception) will be shown.
June 4th
16:30 - 18:00, Room N1, Faculty of Engineering Building No.3
Yoshinobu Kawahara, Osaka University
Title: Learning with discrete convexity
Abstract: Discrete convexity, such as submodularity, is relevant to machine learning mainly from two perspectives: (1) some problems may be formulated directly as the optimization of discrete convex functions, and (2) an analogy with (continuous) convexity provides useful insights into the connections of supervised and unsupervised learning with combinatorial optimization. In this lecture, I first review the basics of discrete convexity, especially the submodularity of set functions. Then, I present some recent advances in machine learning based on discrete convexity, showing concrete examples where discrete convexity is employed to construct promising algorithms. In particular, we look at some recent work on optimization and algorithms using discrete convexity for computer vision, sparse learning and clustering.
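The submodularity mentioned above is the diminishing-returns property f(A ∪ {x}) − f(A) ≥ f(B ∪ {x}) − f(B) for all A ⊆ B and x ∉ B. A small sketch checking it by brute force on a coverage function, a standard submodular example (the sets are made up):

```python
from itertools import chain, combinations

# Coverage function f(S) = size of the union of the sets indexed by S.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}}

def f(S):
    return len(set().union(*(sets[i] for i in S))) if S else 0

def is_submodular(func, ground):
    """Brute-force check of diminishing returns:
    func(A + x) - func(A) >= func(B + x) - func(B) for all A ⊆ B, x ∉ B."""
    subsets = [frozenset(s) for r in range(len(ground) + 1)
               for s in combinations(ground, r)]
    for A in subsets:
        for B in subsets:
            if not A <= B:
                continue
            for x in ground - B:
                if func(A | {x}) - func(A) < func(B | {x}) - func(B):
                    return False
    return True

print(is_submodular(f, frozenset(sets)))  # True: coverage is submodular
```

By contrast, a function with increasing returns such as S ↦ |S|^2 fails the same check, which is what separates the tractable discrete-convex cases from the rest.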
June 11th
16:30 - 18:00, Room N1, Faculty of Engineering Building No.3
Matthew DeBrecht, CiNet, NICT
Title: Machine learning applications to Brain Machine Interfaces
Abstract: A Brain Machine Interface (BMI) is a means of transmitting information directly between the brain and a computer. In a typical BMI setup, information is extracted from brain activity measurements by a trained machine learning algorithm to predict what task the subject is performing or intending to perform. A successful BMI has many important applications, such as giving persons with severe disabilities a new means to interact with their environment, as well as helping healthy individuals improve performance on a task by means of neurofeedback training.
In this talk, we will give an overview of BMI technology, with a focus on non-invasive systems. After briefly explaining typical means of measuring brain activity, we will discuss machine learning methods to extract and classify relevant information from brain activity recordings. In particular, we will discuss “sparse” classification algorithms that automatically extract relevant features from high-dimensional data sets, which helps prevent overfitting to the data. We will also describe recent research into providing task-specific prior knowledge to the classifier, which is important when there are relatively few data samples available for training.
Although BMI technology has developed rapidly in recent years, there are still many challenging problems facing the field, and a large potential for developing and applying new machine learning ideas.
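As a generic illustration of the kind of sparse classifier mentioned above (not the speakers' actual method), L1-regularized logistic regression trained by proximal gradient descent drives irrelevant feature weights exactly to zero:

```python
import numpy as np

def sparse_logistic(X, y, lam=0.1, lr=0.1, iters=2000):
    """L1-regularized logistic regression via proximal gradient (ISTA).
    The L1 penalty zeroes out weights of irrelevant features."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))       # predicted probabilities
        grad = X.T @ (p - y) / n                 # logistic loss gradient
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

# Synthetic "high-dimensional" data: only the first 2 of 20 features matter,
# loosely mimicking a recording where most channels are irrelevant.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = (X[:, 0] - X[:, 1] > 0).astype(float)
w = sparse_logistic(X, y)
```

With few training samples, such automatic feature selection is what keeps the decoder from overfitting; the task-specific priors discussed in the talk push this idea further.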
June 25th
16:30 - 18:00, Room N1, Faculty of Engineering Building No.3
Francois Le Gall, University of Tokyo
Title: Testing Efficiently Algebraic Properties
Abstract: Property testing studies the task of deciding whether an object given as an oracle has (or is close to having) some expected property. In the last two decades many properties, including algebraic properties or graph properties, have been shown to be efficiently testable. In this talk I will focus on testing algebraic properties, survey the state of the art of this area, and then describe some of my recent results on the complexity of testing whether a set is close to a given class of groups.
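A classic example of an efficiently testable algebraic property (textbook material, not the speaker's new results) is linearity of a Boolean function, checked by the Blum-Luby-Rubinfeld (BLR) test with only three oracle queries per trial:

```python
import random

def blr_linearity_test(f, n, trials=100, seed=0):
    """BLR test: f maps n-bit strings (given as ints) to {0, 1} via an oracle.
    Linear (parity) functions always pass; functions far from every linear
    function are rejected with constant probability per trial."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.getrandbits(n)
        y = rng.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):   # linearity check: f(x)+f(y) = f(x+y)
            return False
    return True

n = 8
parity = lambda x: bin(x & 0b10100000).count("1") % 2    # a linear function
majority = lambda x: int(bin(x).count("1") > n // 2)     # far from linear

print(blr_linearity_test(parity, n))    # True
print(blr_linearity_test(majority, n))
```

Testing closeness to a class of groups, as in the talk, follows the same philosophy: query-efficient distinction between objects having the algebraic structure and objects far from it.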
July 2nd
16:30 - 18:00, Room N1, Faculty of Engineering Building No.3
Shang-Hong Lai, National Tsing Hua University
Title: Statistical Modeling of 3D Facial Expression Deformation and Its Applications
Abstract: 3D human face modeling is a very popular topic with many applications, such as facial animation, face recognition and model-based facial video communication. Previous image-based 3D facial modeling research mainly focused on recovering 3D geometry for faces with neutral expression. Most works on the recovery of 3D expression deformation rely on the tracking of facial markers. In this talk, I will present our research on the statistical modeling of the 3D facial expression deformation from a training dataset of
3D face scans with different expressions. To accomplish the 3D expression deformation learning, we develop an accurate 3D face surface registration algorithm. Then, I will show how to apply this statistical model to estimate the 3D face model and expression deformation from a single face image. In addition, we also apply it to generate 3D face caricature models from a single image. Some experimental results will be shown to demonstrate the proposed 3D expression deformation modeling approach.
July 9th
16:30 - 18:00, Room N1, Faculty of Engineering Building No.3
Yi-Shin Chen, National Tsing Hua University
Title: HOMME: Hierarchical-Ontological Mind Map Explorer
Abstract: In this talk we present HOMME (Hierarchical-Ontological Mind Map Explorer), a novel application expanding the way users explore the web. HOMME offers the user two powerful tools. The Ontology Builder creates a mind map around the input query, presenting the underlying semantic
relationships between the query and other terms on the web. The Concept Finder provides the opportunity to experience crossing the boundaries of conceptual spaces. Our demonstration shows the ability of HOMME to convey significant knowledge to the user. Furthermore, future research on Web Information Integration will be possible by leveraging the power of the ontological knowledge provided by HOMME.
July 11th
16:30 - 18:00, Room N1, Faculty of Engineering Building No.3
Francois Le Gall, University of Tokyo
Title: Faster Algorithms for Rectangular Matrix Multiplication
Abstract: In this talk I will present a new algorithm for multiplying two rectangular matrices that is asymptotically faster than all known algorithms for rectangular matrix multiplication. I will also discuss how this can be used to improve the time complexity of several known algorithms that rely on rectangular matrix multiplication. For example, I will show how to obtain an O(n^{2.5302})-time algorithm for the all-pairs shortest paths problem over directed graphs with small integer weights, improving over the O(n^{2.575})-time algorithm by Zwick, and also show how to improve the time complexity of sparse square matrix multiplication.
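To make the APSP connection concrete: Zwick's algorithm repeatedly takes distance (min-plus) products of the weight matrix, and it is this product that fast rectangular matrix multiplication accelerates for small integer weights. A naive cubic sketch of the min-plus squaring scheme (illustration only):

```python
import numpy as np

INF = np.inf

def min_plus(A, B):
    """Distance (min-plus) product: C[i, j] = min_k A[i, k] + B[k, j].
    The speedups discussed in the talk come from replacing this naive
    cubic computation by fast rectangular matrix multiplication."""
    n = A.shape[0]
    return np.array([[np.min(A[i, :] + B[:, j]) for j in range(n)]
                     for i in range(n)])

def apsp(W):
    """All-pairs shortest paths by repeated squaring of the weight matrix:
    ceil(log2(n - 1)) distance products suffice."""
    n = W.shape[0]
    D = W.copy()
    m = 1
    while m < n - 1:
        D = min_plus(D, D)
        m *= 2
    return D

# Weight matrix of a small directed graph (INF = no edge).
W = np.array([[0, 3, INF],
              [INF, 0, 1],
              [2, INF, 0]], dtype=float)
D = apsp(W)
```

Here one squaring already finds, e.g., the two-edge shortest path 0 → 1 → 2 of length 4.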
July 17th
16:30 - 18:00, Room N1, Faculty of Engineering Building No.3
Kevin Duh, NAIST
Title: Learning with multiple objectives
Abstract: Many practical learning problems involve multiple objectives. For example, a classifier deployed online is required to be both accurate and fast; a data analysis method should strike a balance between explanatory power and model complexity. These objectives (e.g. high accuracy, high speed, low complexity) are sometimes conflicting, so the designer of the learning algorithm is often forced to decide on a tradeoff among multiple objectives.
One conventional approach is to linearly combine the objectives into a single objective using weights defined a priori. Here I will argue that an alternative approach, based on Pareto optimality concepts, is more desirable. Pareto optimality gives us a principled way to define what it means to be a good solution without fixing trade-offs or weights. I will show how to extend some common machine learning algorithms to incorporate Pareto optimality. As an application, I will demonstrate multi-objective learning for machine translation, in order to build a translator that performs well with respect to various evaluation metrics (e.g. BLEU, TER) jointly. Finally, I hope to start a discussion about broader applications, such as search and natural language processing, that may benefit from this Pareto-based framework.
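The Pareto-optimality notion used above is easy to make concrete: a candidate is Pareto-optimal if no other candidate is at least as good in every objective and strictly better in at least one. A small sketch (the model scores are made up):

```python
def pareto_front(points):
    """Return the Pareto-optimal points when every objective is maximized."""
    def dominates(p, q):
        return (all(a >= b for a, b in zip(p, q))
                and any(a > b for a, b in zip(p, q)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (accuracy, speed) scores for candidate models.
models = [(0.90, 0.2), (0.85, 0.9), (0.80, 0.5), (0.92, 0.1)]
print(pareto_front(models))
```

Here (0.80, 0.5) is dominated by (0.85, 0.9) and drops out, while the other three represent genuinely different trade-offs, none of which a fixed linear weighting could single out in advance.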