Schedule for Spring 2015
Official course name: Perspectives in Informatics 4, Graduate School of Informatics
Unless noted otherwise, all talks take place on Mondays, 16:30–18:00, Joho 2, Research Bldg. No.7 (総合７), Main Campus

 April 13
 Nicolas Bonneel (Université de Lyon – CNRS), Temporally Consistent Video Processing, hosted by M. Cuturi

 April 20
 Nicolas Papadakis (Institut de Mathématiques de Bordeaux – CNRS), Color transfer and image segmentation with optimal transport, hosted by M. Cuturi

 April 27
 Michael Gastner (Institute for Technical Physics and Materials Science, Budapest), How to find communities in networks, hosted by M. Cuturi
 May 1 (Friday)
 Wolfgang Bein (University of Nevada, Las Vegas), The Smart Grid, Competitive Powerdown Mechanisms and Energy Networks, hosted by K. Iwama

 May 11
 Chung-Shou Liao (National Tsing Hua University), Online Route Planning – the Canadian Traveller Problem Revisited, hosted by K. Iwama

 May 18
 Magnús M. Halldórsson (Reykjavik University), The algorithmic study of wireless networking, hosted by K. Iwama

 June 15
 Bruce Reed (NII, Kyoto), Random Models Of 21st Century Networks And Their Connectivity Structure, hosted by D. Avis

 June 22
 Gabriel Peyré (Université Paris-Dauphine), Entropic Approximation of Wasserstein Gradient Flow, hosted by M. Cuturi

 June 29
 Ajay Jasra (National University of Singapore), Multilevel Sequential Monte Carlo Samplers, hosted by M. Cuturi
 July 1 (Wednesday)
 Kevin Kelly (Carnegie Mellon University), A Noncircular Justification of Ockham’s Razor in Theoretical Inference, hosted by Akihiro Yamamoto

 July 6
 Kazuki Yoshizoe (University of Tokyo), Theory, practice, and parallelization of Monte Carlo Tree Search, hosted by D. Avis

 July 13
 Sourav Bhowmick (Nanyang Technological University), Towards HCI-aware Data Management: Bridging The Chasm Between HCI and Data Management, hosted by Adam Jatowt
 July 24 (Friday)
 David Peleg (Weizmann Institute), Elite and Periphery in Social Networks: An Axiomatic Approach, hosted by K. Iwama
Detailed Information and Abstracts
April 13
16:30–18:00, Joho 2, Research Bldg. No.7 (総合７), Main Campus
Nicolas Bonneel, Université de Lyon – CNRS
Title: Temporally consistent video processing
Abstract: Image processing has seen wide development, and many algorithms have now become practical, widespread and useful. However, applying these algorithms to each frame of a video often breaks temporal consistency, producing undesirable artifacts such as flickering.
In this talk, I will focus on two algorithms, for which we designed temporally consistent solutions.
The first algorithm is aimed at solving the intrinsic decomposition problem. Given an input photograph, the goal of intrinsic decomposition is to separate the image I into a shading layer S and a reflectance layer R, such that I = S*R. The shading layer should contain all smooth variations caused by lighting effects, while the reflectance layer contains the textures and colors of the objects being photographed. We designed a gradient-domain algorithm enforcing the sparsity of reflectance gradients and the smoothness of the shading layer through an L^2 / L^p norm decomposition. To solve this problem on videos, we enforce temporal consistency via causal temporal constraints in a Poisson solver.
The second algorithm is a color transfer algorithm. Movies are often color graded by professional artists, who spend hours on short sequences to fine-tune colors. We propose an algorithm which is able to transfer the color grade of these professional videos onto amateur sequences. For instance, we are able to reproduce the look-and-feel of Transformers or Amélie Poulain on amateur videos. Our method is based on the geometrical study of the Wasserstein space – the space used in optimal transport theory.
These algorithms have been published at SIGGRAPH Asia 2014 and SIGGRAPH 2013, respectively.
April 20
16:30–18:00, Joho 2, Research Bldg. No.7 (総合７), Main Campus
Nicolas Papadakis, Institut de Mathématiques de Bordeaux – CNRS
Title: Color transfer and image segmentation with optimal transport
Abstract:
In this talk, the discrete optimal transportation framework is considered to solve two different image processing problems.
The first part will focus on color transfer between images. Optimal transport is commonly used to solve this problem. However, a major drawback of this framework is the lack of regularity of the transport map and its lack of robustness to outliers. In practice, as has been partially addressed in previous works, optimal transport requires some relaxation and regularization to achieve these desirable properties. With such methods, since one feature can be matched to several others, significant interpolation between different features arises. This is not an issue for comparison purposes, but it involves strong and unwanted smoothing for transfer applications. We thus introduce a new regularized method based on a non-convex formulation that minimizes the transport dispersion by enforcing a one-to-one matching of features. In our color transfer experiments, we show that minimizing the transport dispersion, combined with regularization, reduces color artifacts and color mixing.
The second part will be dedicated to the histogram-based segmentation of a color image into two regions. In the considered framework, fixed exemplar histograms define a prior on the statistical features of the two regions in competition. We investigate the use of regularized transport-based cost functions as discrepancy measures between color histograms and consider a spatial regularization of the segmentation map with total variation. We finally rely on a primal-dual algorithm to solve the resulting convex optimization problem. Experiments illustrate the robustness of the proposed method for the segmentation of natural color images.
Both methods are joint works with Julien Rabin from Université de Caen, GREYC, France.
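As a toy anchor for the transport maps both parts build on: between two equal-size discrete distributions on the real line, the optimal transport map simply pairs sorted samples, which is the basis of channel-wise color transfer. The sketch below is my own simplification with made-up values, not the regularized non-convex method of the talk:

```python
import numpy as np

def transfer_1d(source, target):
    """Channel-wise 1-D optimal transport: the optimal map between two
    equal-size empirical distributions on the line pairs sorted samples."""
    src_order = np.argsort(source)   # ranks of the source samples
    tgt_sorted = np.sort(target)     # target values in increasing order
    result = np.empty_like(source, dtype=float)
    result[src_order] = tgt_sorted   # k-th smallest source -> k-th smallest target
    return result

# toy example: push an "amateur" channel onto a "graded" channel's histogram
amateur = np.array([0.2, 0.8, 0.5, 0.1])
graded  = np.array([0.4, 0.9, 0.6, 0.3])
out = transfer_1d(amateur, graded)
```

The regularization discussed in the talk addresses exactly what this naive map lacks: smoothness of the map and robustness when the two distributions differ in shape or contain outliers.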
April 27
16:30–18:00, Joho 2, Research Bldg. No.7 (総合７), Main Campus
Michael Gastner, Institute for Technical Physics and Materials Science, Budapest
Title: How to find communities in networks
Abstract: Networks representing social interactions typically exhibit a high degree of clustering. Links tend to be between individuals in the same group rather than between different groups. In small networks it is sometimes possible to determine the groups by visual inspection, but even for networks with a few dozen individuals one often has to resort to automated community detection. Much research has been conducted in this area in the past two decades. I will first motivate the importance of community detection with examples from anthropology and biology. Then I will explain two different algorithmic approaches, one based on modularity maximization and another based on information compression.
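For concreteness, the quantity the first approach maximizes, modularity, can be computed directly from an edge list and a candidate partition. The sketch below is my own toy illustration (tiny hand-made graph), not Gastner's implementation:

```python
def modularity(edges, communities):
    """Newman modularity Q = sum_c (e_c/m - (d_c/(2m))^2), where e_c is the
    number of intra-community edges and d_c the total degree of community c."""
    m = len(edges)
    comm_of = {v: c for c, nodes in enumerate(communities) for v in nodes}
    intra = [0] * len(communities)   # edges inside each community
    deg = [0] * len(communities)     # degree sum of each community
    for u, v in edges:
        deg[comm_of[u]] += 1
        deg[comm_of[v]] += 1
        if comm_of[u] == comm_of[v]:
            intra[comm_of[u]] += 1
    return sum(e / m - (d / (2 * m)) ** 2 for e, d in zip(intra, deg))

# two triangles joined by a single bridge edge: the natural split scores high
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
q = modularity(edges, [{0, 1, 2}, {3, 4, 5}])
```

Modularity maximization searches over partitions for the one with the largest Q; putting everything in one community always gives Q = 0, which the split above beats.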
May 1
16:30–18:00, Joho 2, Research Bldg. No.7 (総合７), Main Campus
Wolfgang Bein, University of Nevada, Las Vegas
Title: The Smart Grid, Competitive Powerdown Mechanisms and Energy Networks
Abstract: The existing electricity grid has been remarkably successful and reliable, which can be attributed to accurate modeling of the load pattern on the grid to predict demand. But it is primarily an open-loop system designed to support only one-way, dependable flow of electrical power supplied by conventional fossil, hydro and nuclear power plants. New algorithmic ideas are needed for a radically different grid, which manages variable and distributed supply and demand from large battery storage systems, web-enabled appliances or electrical vehicles. The talk will describe how some of these issues can be addressed in the framework of online competitive analysis.
Powerdown mechanisms are well known and widely used to save energy; they can also be utilized to control the output of conventional power generation sources by throttling down or switching off when reliable data about future demands do not exist. For two-state on-off systems, a novel decrease-and-reset algorithm is analyzed. The method automatically attunes itself to the frequency of requests, and gives better performance on real-world inputs than standard algorithms. Work is under way to generalize the results to systems with multiple states. Another model uses competitive energy networks, where it is assumed that supplies and demands are given as staged requests by an omniscient adversary and algorithms must respond online without knowledge of future requests.
Recent results were obtained in collaboration with researchers at the University of Luebeck and the University of Electro-Communications, Tokyo. The work presented in this talk was funded by the National Science Foundation (IIA 1427584).
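The flavor of competitive powerdown analysis can be seen in the classic two-state break-even rule: stay on until the accumulated idle cost equals the wake-up cost, then power off; this rule is 2-competitive against an offline adversary. The sketch below illustrates that standard rule with made-up costs; it is not the decrease-and-reset algorithm of the talk:

```python
def powerdown_cost(idle_gaps, wake_cost, idle_rate=1.0):
    """Cost of the break-even rule over a sequence of idle-gap lengths:
    pay idle_rate per unit of time while on; once idle cost reaches
    wake_cost, power off and pay wake_cost when the next request arrives."""
    threshold = wake_cost / idle_rate
    total = 0.0
    for gap in idle_gaps:
        if gap <= threshold:
            total += gap * idle_rate                     # stayed on the whole gap
        else:
            total += threshold * idle_rate + wake_cost   # powered down, then woke up
    return total

def offline_cost(idle_gaps, wake_cost, idle_rate=1.0):
    """Optimal offline cost: per gap, the cheaper of staying on or switching off."""
    return sum(min(gap * idle_rate, wake_cost) for gap in idle_gaps)

gaps = [0.5, 3.0, 10.0]
online = powerdown_cost(gaps, wake_cost=2.0)
opt = offline_cost(gaps, wake_cost=2.0)
```

On any input the break-even rule pays at most twice the offline optimum, which is the sense in which competitive analysis certifies an online strategy without demand forecasts.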
May 11
16:30–18:00, Joho 2, Research Bldg. No.7 (総合７), Main Campus
Chung-Shou Liao, National Tsing Hua University
Title: Online Route Planning – the Canadian Traveller Problem Revisited
Abstract: This study revisits the Canadian Traveller Problem (CTP), which finds applications in dynamic navigation systems. Given a road network G=(V,E) with a source s and a destination t in V, a traveller knows the entire network in advance and wishes to travel as quickly as possible from s to t, but discovers online that some roads are blocked (e.g., by snow or accidents) once reaching them. The objective is to derive an adaptive strategy whose competitive ratio, which compares the distance traversed with that of the static shortest s-t path in hindsight, is minimized. This problem was initiated by Papadimitriou and Yannakakis in 1991. They proved that it is PSPACE-complete to obtain an algorithm with a bounded competitive ratio. Furthermore, if at most k roads can be blocked, then the optimal competitive ratio for a deterministic online algorithm is 2k+1, while the only randomized result known is a lower bound of k+1.
In this study, we show for the first time that a polynomial-time randomized algorithm can beat the best deterministic algorithms, surpassing the 2k+1 deterministic bound by an o(1) factor. Moreover, we show that a randomized algorithm achieves a better (1.7k+1)-competitive ratio in pseudo-polynomial time. This work appeared in the proceedings of ICALP 2014.
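A natural deterministic baseline for CTP is greedy replanning: follow the shortest path in the currently known graph and rerun Dijkstra whenever a blocked road is discovered. The sketch below is such a baseline on a toy graph of my own; it is not the randomized (1.7k+1)-competitive algorithm of the talk:

```python
import heapq

def dijkstra(adj, s, t):
    """Shortest s-t path in adj = {u: {v: weight}}; returns (dist, path) or (inf, None)."""
    dist, prev, pq = {s: 0.0}, {}, [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            path = [t]
            while path[-1] != s:
                path.append(prev[path[-1]])
            return d, path[::-1]
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return float("inf"), None

def greedy_ctp(adj, blocked, s, t):
    """Walk the current shortest path; on discovering a blocked road at an
    endpoint, delete it from the known map and replan."""
    known = {u: dict(nbrs) for u, nbrs in adj.items()}
    pos, travelled = s, 0.0
    while pos != t:
        _, path = dijkstra(known, pos, t)
        if path is None:
            return float("inf")
        nxt = path[1]
        if (pos, nxt) in blocked or (nxt, pos) in blocked:
            del known[pos][nxt]
            del known[nxt][pos]
            continue
        travelled += known[pos][nxt]
        pos = nxt
    return travelled

# diamond graph: the short road via a turns out to be blocked at (a, t)
adj = {"s": {"a": 1, "b": 2}, "a": {"s": 1, "t": 1},
       "b": {"s": 2, "t": 2}, "t": {"a": 1, "b": 2}}
cost = greedy_ctp(adj, blocked={("a", "t")}, s="s", t="t")
```

Here the traveller walks s-a, discovers the block, backtracks, and takes s-b-t, paying 6 against an offline optimum of 4; adversarial instances drive this kind of strategy toward the 2k+1 deterministic bound.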
May 18
16:30–18:00, Joho 2, Research Bldg. No.7 (総合７), Main Campus
Magnús M. Halldórsson, RIMS, Kyoto University & ICE-TCS, School of Computer Science, Reykjavik University
Title: The algorithmic study of wireless networking
Abstract: Wireless communication has had a tremendous impact on our lives, and can be expected to continue to do so. We examine the study of wireless networks from an algorithmic perspective. That means that we want to give algorithms and reason about them in a rigorous way, proving results that hold in a wide variety of technologies and environments.
Actual wireless environments have proven to be highly challenging, due to the vagaries of signal propagation. This unavoidably leads to some tension between what can be formalized and what is observed. The core question boils down to identifying the “right” model(s).
In this talk we survey recent progress in the algorithmic understanding of wireless networking, with a focus on the relationship, tradeoffs and tension between graph-based models and the more involved SINR-based models.
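In the SINR model mentioned above, a transmission succeeds when the received signal power dominates noise plus the summed interference from concurrent senders. The sketch below checks that condition for nodes in the plane; all numeric parameters (path-loss exponent, noise, threshold) are illustrative assumptions of mine, not values from the talk:

```python
def sinr_ok(sender, receiver, others, power=1.0, alpha=3.0, noise=1e-4, beta=2.0):
    """Transmission succeeds iff SINR = P*d^-alpha / (N + interference) >= beta,
    where interference sums the path-loss-attenuated power of concurrent senders."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    signal = power * dist(sender, receiver) ** -alpha
    interference = sum(power * dist(o, receiver) ** -alpha for o in others)
    return signal / (noise + interference) >= beta

# a short link survives a distant concurrent sender; a long link next to one does not
close_ok = sinr_ok((0, 0), (1, 0), others=[(10, 0)])
far_bad  = sinr_ok((0, 0), (5, 0), others=[(6, 0)])
```

Graph-based models would encode these two situations as fixed conflict edges; the SINR model instead makes success depend additively on every concurrent transmission, which is the source of the tension the talk surveys.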
June 15
16:30–18:00, Joho 2, Research Bldg. No.7 (総合７), Main Campus
Bruce Reed, NII, Kyoto
Title: Random Models Of 21st Century Networks And Their Connectivity Structure
Abstract: The traditional Erdős–Rényi model of a random network is of little use in modeling the type of complex networks which modern researchers study. It postulates that each node has the same likelihood of being attached to every other node. However, in the web, for example, certain authoritative pages will have many more links entering them. A 1995 paper of Molloy and Reed, cited over 1500 times, sets out conditions guaranteeing the existence of a giant component in a network with a specified degree sequence. This work has attracted a great deal of attention because it can be applied to random models of a wide range of complicated 21st-century networks such as the web or biological networks operating at a sub-molecular level.
A heuristic argument suggests that a giant component will exist provided the sum of the squares of the degrees of the nodes of the network is at least twice the sum of the degrees. Molloy and Reed proved that this is indeed true subject to certain technical conditions. Many authors have obtained related results by specifying different technical conditions, or by tying down the size of the giant component.
Since the interest in this result lies in its wide applicability, it is natural to try to prove it under as few assumptions as possible. Recently, Joos, Perarnau-Llobet, Rautenbach, and Reed proved the result under essentially no conditions.
I will present, in an accessible way, a variety of complex networks and their random models to which the Molloy‐Reed result has been applied. I will then sketch the proof of our result and how it differs from the proof of the Molloy‐Reed result.
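The heuristic criterion above can be checked directly from a degree sequence: a giant component is expected when the sum of squared degrees exceeds twice the degree sum, i.e. sum d_i(d_i - 2) > 0. A quick sketch with toy degree sequences of my own choosing (ignoring the technical conditions the talk discusses):

```python
def expects_giant_component(degrees):
    """Molloy-Reed heuristic: a random graph with this degree sequence is
    expected to have a giant component when sum d_i^2 > 2 * sum d_i,
    equivalently sum d_i * (d_i - 2) > 0."""
    return sum(d * d for d in degrees) > 2 * sum(degrees)

cycle = [2] * 100                   # all degree 2: the borderline case
heavy_tail = [1] * 90 + [20] * 10   # a few hubs push the sum of squares up
```

Intuitively, high-degree hubs make sum d_i^2 large, which is why heavy-tailed networks like the web sit comfortably on the giant-component side of the threshold.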
June 22
16:30–18:00, Joho 2, Research Bldg. No.7 (総合７), Main Campus
Gabriel Peyré, Université Paris-Dauphine
Title: Entropic Approximation of Wasserstein Gradient Flow
Abstract: In this talk I will detail a novel numerical scheme to approximate gradient flows for optimal transport (i.e. Wasserstein) metrics. These flows have proved useful to tackle, theoretically and numerically, nonlinear diffusion equations that model for instance porous media or crowd evolution. A bottleneck of these approaches is the high computational load induced by the resolution of each step. Indeed, this corresponds to the resolution of a convex optimization problem involving a Wasserstein distance to the previous iterate. Following several recent works on the approximation of Wasserstein distances, I consider a discrete flow induced by an entropic regularization of the transportation coupling. This entropic regularization allows one to trade the initial Wasserstein fidelity term for a Kullback-Leibler divergence, which is easier to deal with numerically. I show how Kullback-Leibler first-order proximal schemes, and in particular Dykstra’s algorithm, can be used to compute each step of the regularized flow. The resulting algorithm is fast, parallelizable and versatile, because it only requires multiplications by the Gibbs kernel exp(-c/gamma), where c is the ground cost and gamma>0 the regularization strength. On Euclidean domains discretized on a uniform grid, this corresponds to linear filtering (for instance Gaussian filtering when c is the squared Euclidean distance), which can be computed in nearly linear time. On more general domains, such as (possibly non-convex) shapes or manifolds discretized by a triangular mesh, following a recently proposed numerical scheme for optimal transport, this Gibbs kernel multiplication is approximated by a short-time heat diffusion. I show numerical illustrations of this method to approximate crowd motions on complicated domains as well as nonlinear diffusions with spatially-varying coefficients.
June 29
16:30–18:00, Joho 2, Research Bldg. No.7 (総合７), Main Campus
Ajay Jasra, National University of Singapore
Title: Multilevel Sequential Monte Carlo Samplers
Abstract: The approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs) is considered herein; this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step size h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort needed to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > … > h_L. In many practical problems of interest, one cannot achieve i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the computational effort needed to estimate expectations, for a given level of error, can be maintained in the SMC context. The approach is numerically illustrated on a Bayesian inverse problem. This is joint work with Kody Law (KAUST), Raul Tempone (KAUST) and Alex Beskos (UCL).
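The telescoping identity underlying MLMC writes the finest-level expectation as the coarsest-level expectation plus a sum of level corrections, each estimated independently with progressively fewer samples because the corrections have small variance. The sketch below is my own toy illustration (truncated-Taylor "discretisations" of exp, uniform samples), not the SMC sampler of the talk:

```python
import math
import random

def g_level(x, level):
    """Level-l approximation of exp(x): Taylor series truncated to level+2 terms,
    playing the role of a PDE solve at discretisation level l."""
    return sum(x ** k / math.factorial(k) for k in range(level + 2))

def mlmc_estimate(max_level, n_samples, seed=0):
    """Telescoping MLMC estimator of E[exp(U)], U ~ Uniform(0,1):
    E[g_L] = E[g_0] + sum_{l=1}^{L} E[g_l - g_{l-1}],
    with independent samples on each level."""
    rng = random.Random(seed)
    # coarsest level: many cheap samples
    est = sum(g_level(rng.random(), 0) for _ in range(n_samples[0])) / n_samples[0]
    # corrections: g_l - g_{l-1} is small, so few samples suffice per level
    for level in range(1, max_level + 1):
        n = n_samples[level]
        est += sum(g_level(u, level) - g_level(u, level - 1)
                   for u in (rng.random() for _ in range(n))) / n
    return est

approx = mlmc_estimate(max_level=4, n_samples=[4000, 1000, 250, 60, 15])
# true value: E[exp(U)] = e - 1
```

The SMC version of the talk replaces the i.i.d. sampling on each level, which is unavailable for the posterior distributions of Bayesian inverse problems, with sequential Monte Carlo while keeping this telescoping structure.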
July 1
16:30–18:00, Joho 2, Research Bldg. No.7 (総合７), Main Campus
Kevin Kelly, Carnegie Mellon University
Title: A Noncircular Justification of Ockham’s Razor in Theoretical Inference
Abstract: It is hard to get anywhere in theoretical science without Ockham’s razor, which slices away coincidental explanations based on tuning of free parameters in ad hoc theories. But there might be coincidences, so how could a systematic bias against them help us to find the true causes? Convergence to the true theory in the large-sample limit (i.e., pointwise consistency) is too weak to mandate a bias against ad hoc theories in the short run. Uniform bounds on the chance of error (i.e., uniform consistency) are very nice, but they are too strong to be feasible for inductive learning of theories that transcend the data to an infinite degree. An intermediate notion of truth-conduciveness is required. Based on ideas from computational learning theory, we argue that Ockham’s razor is necessary for optimally direct convergence to the truth, where directness is explicated in terms of avoidance of cycles of opinion: believing A, repudiating A, and then returning to A again. A related principle, which we call patience, is to never rule out a simplest hypothesis compatible with experience until the input information does. That principle turns out to be necessary for avoiding unnecessary reversals of opinion: believing A and then repudiating A when some convergent method can avoid doing so. Our approach is based on a general, topological perspective on learning and inductive inference, to which DeBrecht and Yamamoto have made elegant contributions. Time permitting, we will sketch how to apply the topological ideas to stochastic theory identification, with application to causal inference from non-experimental statistical data, which relies heavily on Ockham’s razor.
This work is in collaboration with Konstantin Genin, Carnegie Mellon University and
Hanti Lin, University of California, Davis.
July 6
16:30–18:00, Joho 2, Research Bldg. No.7 (総合７), Main Campus
Kazuki Yoshizoe, University of Tokyo
Title: Theory, practice, and parallelization of Monte Carlo Tree Search
Abstract: Monte Carlo Tree Search (MCTS) is a tree search algorithm invented for the game of Go in 2006. It was a major breakthrough in computer Go research and was also applied to many other problems. MCTS is a probabilistic search based on a combination of random sampling and tree search. Unlike many other tree search algorithms, MCTS works without an evaluation function. I will introduce the background and the theory behind MCTS, which is based on the Multi-Armed Bandit problem. In practice, theory is often ignored; I will describe several enhancements of MCTS that are used in practice. I will also show the characteristics of the algorithm through some example MCTS applications. Parallelization is important for all algorithms, but parallel tree search is a challenging problem. I will explain the difficulty of parallel tree search and how we are trying to overcome it.
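The bandit rule at the heart of MCTS selection is UCB1: descend into the child maximizing mean reward plus an exploration bonus. The sketch below shows the formula on made-up statistics; it is a generic illustration of the standard rule, not Yoshizoe's parallel implementation:

```python
import math

def ucb1_select(wins, visits, c=math.sqrt(2)):
    """Pick the arm maximizing w_i/n_i + c * sqrt(ln(total)/n_i);
    unvisited arms are tried first, as in MCTS node expansion."""
    total = sum(visits)
    best, best_score = None, float("-inf")
    for i, (w, n) in enumerate(zip(wins, visits)):
        if n == 0:
            return i                 # always expand unvisited children first
        score = w / n + c * math.sqrt(math.log(total) / n)
        if score > best_score:
            best, best_score = i, score
    return best

# arm 1 has the best observed mean, but arm 2 is badly under-explored,
# so its exploration bonus dominates and it gets the next sample
choice = ucb1_select(wins=[10, 30, 1], visits=[50, 60, 2])
```

This trade-off between exploitation (the mean term) and exploration (the bonus term) is also what makes parallelization hard: concurrent workers relying on shared, momentarily stale statistics tend to pile onto the same child.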
July 13
16:30–18:00, Joho 2, Research Bldg. No.7 (総合７), Main Campus
Sourav Bhowmick, Nanyang Technological University
Title: Towards HCI-aware Data Management: Bridging The Chasm Between HCI and Data Management
Abstract: Visual query interface design and the devising of efficient query processing techniques have been independent of each other for decades. This is primarily because the two key enablers of these efforts, namely HCI and database management, have evolved into two disparate and vibrant scientific fields, rarely making any systematic effort to leverage techniques and principles from each other towards superior realization of these efforts. Specifically, DB researchers have traditionally focused on “under-the-hood” techniques such as indexing, query processing, and transactions. On the other hand, the HCI community has focused on “outside-the-hood” issues such as user task modeling, menu design models, and human factors. DB researchers tend to shy away from outside-the-hood challenges with HCI flavors, whereas HCI researchers are reluctant to look at under-the-hood challenges that may influence the way they build visual interfaces, among others. We believe that the chasm between these two vibrant fields sometimes creates obstacles to providing superlative visual querying and data management services to end users. On the one hand, as the visual query interface construction process is traditionally data-unaware, it may fail to generate flexible, portable, and user-friendly query interfaces. On the other hand, query processing techniques are traditionally invoked only once a user has completed her visual query formulation, as the former is completely decoupled from the latter.
In this talk, we question the traditional reluctance of the DB (resp. HCI) community to embark on seemingly non-DB-ish (resp. non-HCI-ish) grand challenges. We explore the vision of bridging the longstanding chasm between traditional data management and HCI (referred to as HCI-aware data management) in the context of querying graph-structured data. Specifically, we will highlight our research on building an HCI-aware visual graph data management framework called HumanDB, which encapsulates several novel and intriguing research challenges toward the grand goal of bridging this chasm. Realization of these challenges entails significant rethinking of several longstanding strategies for visual interface construction and data management. Last but not least, through this talk we would like to encourage the DB community to look across the fence and get more engaged with these challenges, which lie outside the traditionally narrow boundaries of data management research.
July 24
Friday, 16:30–18:00, Joho 2, Research Bldg. No.7 (総合７), Main Campus
David Peleg, Weizmann Institute
Title: Elite and Periphery in Social Networks: An Axiomatic Approach
Abstract: Many societies exhibit a two-tier structure composed of a social elite, namely, a relatively small but well-connected and highly influential group of powerful individuals, and the rest of society (referred to hereafter as the periphery). The talk will concern understanding the structure, size and properties of the elite and the powers that shape it, using an axiom-based model for the relationships between the elite and the periphery. (Joint work with Chen Avin, Zvi Lotker, Yvonne-Anne Pignolet, and Itzik Turkel)