Schedule for Fall 2014 - Kyoto University Informatics Seminar

Official course name: Perspectives in Informatics 5, Graduate School of Informatics

Unless noted otherwise, all talks take place 14:45 -16:15 in Joho 2, Research Bldg. No.7 (総合7), Main Campus

• October 2: Introduction to the seminar and research presentation by Marco Cuturi (Kyoto University)
• October 9: Justin Solomon (Stanford University), Transportation Techniques for Geometric Data Processing, hosted by M. Cuturi
• October 16: No lecture
• October 23: David Avis (Kyoto University and McGill University), Linear Programming, Integer Programming and the P vs NP Question
• October 30: Mathieu Blondel (NTT Communication Science Laboratories), Online Passive-Aggressive Algorithms for Non-Negative Matrix Factorization and Completion, hosted by M. Cuturi
• November 6: Toru Tamaki (Hiroshima University), Computer vision + Computer graphics = Tomography, hosted by M. Cuturi
• November 10 (irregular slot, Monday 16:30 - 18:00): Pavel Klavik (Charles University), A New Approach to Sharing Your Understanding, hosted by K. Iwama
• November 17 (irregular slot, Monday 16:30 - 18:00): Aaditya Ramdas (Carnegie Mellon University), On the High-dimensional Power of a Nonparametric Two Sample Test in High Dimensions, hosted by M. Cuturi
• November 20: Robert Kowalski (Imperial College), Towards a Science of Computing, hosted by A. Yamamoto
• December 1 (irregular slot, Monday 16:30 - 18:00): Aleksandar Shurbevski (Kyoto University), A Simple Routing Problem Turned Complex, and an Approximation Framework in Response, hosted by D. Avis
• December 4: Chen Change Loy (The Chinese University of Hong Kong), Learning a Deep Convolutional Network for Image Super-Resolution, hosted by Xuefeng Liang
• December 11: Rajeev Raman (University of Leicester), Succinct Data Structures and Applications, hosted by A. Yamamoto
• December 11 (irregular slot, Thursday 16:30 - 18:00, Joho 2): Adnan Darwiche (UCLA), The Same-Decision Probability: Theory and Applications, hosted by A. Yamamoto
• December 18: Hiroshi Imai (University of Tokyo), Matchings, Transversals and Their Extensions to Polymatroids Related to Graphs, hosted by D. Avis

Detailed Information and Abstracts

October 2

14:45 -16:15, Joho 2, Research Bldg. No.7 (総合7), Main Campus
Marco Cuturi, Kyoto University
Title: Introduction to the Seminar and Research Presentation
Abstract: In this first session of the seminar, we will give an introduction to the grading system of this course. We will follow up with a presentation of current research carried out by M. Cuturi of the Graduate School of Informatics.

October 9

14:45 -16:15, Joho 2, Research Bldg. No.7 (総合7), Main Campus
Justin Solomon, Stanford University
Title: Transportation Techniques for Geometric Data Processing
Abstract: Many methods dealing with data on geometric domains suffer from noise, nonconvexity, and other challenges because they are forced to make choices among nearly-indistinguishable possibilities. In this talk, I will present techniques for explicitly acknowledging these and other ambiguities within shape and data processing pipelines. Rather than making arbitrary tie-breaking decisions, these methods maintain distributions over potential outcomes. This “soft” probabilistic framework is supported by the use of optimal transportation distances, which extend notions from classical geometry to the probabilistic domain. In addition to introducing the relevant theory, I will show how it can be used to derive practical algorithms for network analysis, surface mapping, semi-supervised learning, and other tasks.

Selected relevant papers:

• Solomon, Justin, Raif Rustamov, Leonidas Guibas, and Adrian Butscher. “Earth Mover’s Distances on Discrete Surfaces.” SIGGRAPH 2014, Vancouver.

• Solomon, Justin, Raif Rustamov, Leonidas Guibas, and Adrian Butscher. “Wasserstein Propagation for Semi-Supervised Learning.” ICML 2014, Beijing.

• Solomon, Justin, Leonidas Guibas, and Adrian Butscher. “Dirichlet Energy for Analysis and Synthesis of Soft Maps.” SGP 2013, Genoa.
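
As a small, self-contained illustration of the transportation distances mentioned in the abstract, the sketch below computes an entropy-regularized optimal transport distance between two histograms with Sinkhorn iterations. It is a standard textbook-style construction included for readers who want to experiment before the talk, not a description of the speaker's methods; all parameter values are illustrative.

# A minimal sketch (illustrative only): entropy-regularized optimal transport
# between two discrete distributions via Sinkhorn iterations, one standard way
# to compute the kind of transportation distance mentioned in the abstract.
import numpy as np

def sinkhorn_distance(a, b, cost, reg=0.1, n_iters=200):
    """Approximate the optimal transport cost between histograms a and b.

    a, b : 1-D arrays summing to 1 (source and target distributions)
    cost : pairwise ground-cost matrix of shape (len(a), len(b))
    reg  : entropic regularization strength (assumed value, for illustration)
    """
    K = np.exp(-cost / reg)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):             # alternating Sinkhorn scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]   # approximate transport plan
    return np.sum(plan * cost)

# Toy usage: two bump-shaped histograms on a 1-D grid, squared-distance cost.
x = np.linspace(0.0, 1.0, 50)
a = np.exp(-(x - 0.3) ** 2 / 0.01); a /= a.sum()
b = np.exp(-(x - 0.7) ** 2 / 0.01); b /= b.sum()
C = (x[:, None] - x[None, :]) ** 2
print(sinkhorn_distance(a, b, C))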

October 16

No Lecture

October 23

14:45 -16:15, Joho 2, Research Bldg. No.7 (総合7), Main Campus
David Avis, Kyoto University and McGill University
Title: Linear Programming, Integer Programming and the P vs NP Question
Abstract: Linear programming (LP) is by far the most important optimization problem to have been efficiently solved both in theory and in practice. It lies at the heart of the integer programming (IP) software used on a daily basis to solve a wide variety of applied problems, including NP-hard problems such as vehicle routing, scheduling and the like. Unless P=NP there is no polynomial time algorithm for IP. However, LP is solvable in polynomial time, and it is the basis of a useful model for studying the question of whether P is equal to NP. We introduce this model, called extension complexity, and present some recent results in this area related to problems in P and also to NP-hard problems.
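
For reference, the central definition behind this model can be stated compactly; the following is the standard formulation from the literature rather than a quotation from the speaker. An extended formulation of a polytope P is a polytope Q in a higher-dimensional space together with an affine projection pi such that pi(Q) = P. The extension complexity of P is

    xc(P) = min { number of facets of Q : Q projects affinely onto P }.

A small xc(P) means that linear optimization over P can be written as a compact linear program in the lifted variables; lower bounds on xc(P) therefore rule out small LP formulations of the corresponding problem.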

October 30

14:45 -16:15, Joho 2, Research Bldg. No.7 (総合7), Main Campus
Mathieu Blondel, NTT Communication Science Laboratories
Title: Online Passive-Aggressive Algorithms for Non-Negative Matrix Factorization and Completion
Abstract: Stochastic Gradient Descent (SGD) is a popular online algorithm for large-scale matrix factorization. However, SGD can be difficult for practitioners to use, because its performance is very sensitive to the choice of the learning rate parameter. In this talk, I will present non-negative passive-aggressive (NN-PA), a family of online algorithms for non-negative matrix factorization (NMF). The algorithms are scalable, easy to implement and do not require the tedious tuning of a learning rate parameter. I will demonstrate the effectiveness of the algorithms on three large-scale matrix completion problems.
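
For context, the sketch below implements the SGD baseline referred to in the abstract: plain stochastic gradient descent on the observed entries with projection onto the non-negative orthant. The learning rate eta is exactly the parameter whose tuning NN-PA is designed to avoid; this is not the NN-PA update itself, and all names and sizes are made up for illustration.

# A sketch of the SGD baseline mentioned in the abstract (not the NN-PA update):
# stochastic gradient descent on observed entries of a matrix, with projection
# onto the non-negative orthant.
import numpy as np

def sgd_nmf(observed, shape, rank=10, eta=0.01, epochs=20, seed=0):
    """observed: list of (row, col, value) triples; shape: (n_rows, n_cols)."""
    rng = np.random.default_rng(seed)
    W = rng.random((shape[0], rank))
    H = rng.random((shape[1], rank))
    order = np.arange(len(observed))
    for _ in range(epochs):
        rng.shuffle(order)                     # visit observed entries in random order
        for idx in order:
            i, j, x = observed[idx]
            err = x - W[i] @ H[j]              # residual on this single entry
            W[i] = np.maximum(0.0, W[i] + eta * err * H[j])   # gradient step + projection
            H[j] = np.maximum(0.0, H[j] + eta * err * W[i])
    return W, H

# Hypothetical toy usage: a few observed entries of a 4 x 5 matrix.
ratings = [(0, 0, 5.0), (0, 3, 1.0), (1, 1, 4.0), (2, 2, 3.0), (3, 4, 2.0)]
W, H = sgd_nmf(ratings, shape=(4, 5), rank=2)
print(np.round(W @ H.T, 2))

Halving or doubling eta in this sketch visibly changes the quality of the fit, which is the sensitivity that motivates learning-rate-free updates.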

November 6

14:45 -16:15, Joho 2, Research Bldg. No.7 (総合7), Main Campus
Toru Tamaki, Hiroshima University
Title: Computer vision + Computer graphics = Tomography
Abstract: Tomography, the inverse problem of seeing inside an object by emitting rays and observing their output, is an important issue in physics, medical imaging, computer vision and related research fields. In this talk, I will present a computer-vision approach to optical scattering tomography, that is, using visible or infrared light to estimate the structure inside participating media in which light scatters severely. Our starting point is the volumetric rendering techniques developed in the computer graphics community. I will therefore first review fundamental techniques such as reflection, the BRDF, and the volume rendering equation, then introduce the path integral used to formulate an inverse problem that we solve with an interior point method. I will show some simulation results and conclude with future directions.
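
As background, the following minimal sketch shows the simplest forward model behind ray-based tomography, the Beer-Lambert law, in which the measured transmittance decays with the integral of the extinction coefficient along a ray. Scattering, which is the focus of the talk, is deliberately ignored here, and all sizes and values are illustrative.

# A minimal sketch of the non-scattering forward model in ray-based tomography.
import numpy as np

def transmittance(sigma, entry, direction, step=0.5, n_steps=200):
    """Approximate T = exp(-integral of sigma ds) along one ray.

    sigma     : 3-D array of extinction coefficients (the unknown in tomography)
    entry     : ray entry point in voxel coordinates
    direction : ray direction (will be normalized)
    """
    p = np.asarray(entry, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    optical_depth = 0.0
    for _ in range(n_steps):                           # simple fixed-step ray marching
        idx = tuple(np.floor(p).astype(int))
        if all(0 <= idx[k] < sigma.shape[k] for k in range(3)):
            optical_depth += sigma[idx] * step         # accumulate sigma * ds
        p = p + step * d
    return np.exp(-optical_depth)

# Toy usage: a uniform 16^3 medium; recovering sigma from many such
# measurements is the inverse problem the talk formulates via path integrals.
vol = np.full((16, 16, 16), 0.05)
print(transmittance(vol, entry=(0.5, 8.0, 8.0), direction=(1.0, 0.0, 0.0)))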

November 10

IRREGULAR SLOT, 16:30 -18:00, Joho 2, Research Bldg. No.7 (総合7), Main Campus
Pavel Klavik, Charles University
Title: A New Approach to Sharing Your Understanding
Abstract: One of the fundamental problems in communication is how to share your ideas and how to understand the ideas of other people. There are two problems with writing ideas down: 1) they have to be given a linear structure, because books are read from beginning to end; 2) people reading the text will interpret it according to their own life experiences, so the original meaning is not faithfully conveyed.

We propose a different approach to exchanging ideas by building mind models, which computers make very easy. I will try to illustrate the advantages of this approach and its implications for communication and teaching. I will also try to explain why the current approach to teaching does not work well.

This is a collaboration with Zdenek Hedrlin and students from his seminars.

November 17

IRREGULAR SLOT, 16:30 -18:00, Joho 2, Research Bldg. No.7 (総合7), Main Campus
Aaditya Ramdas, Carnegie Mellon University
Title: On the High-dimensional Power of a Nonparametric Two Sample Test in High Dimensions
Abstract: Nonparametric two sample testing deals with the question of consistently deciding whether two distributions are different, given samples from both, without making any parametric assumptions about the form of the distributions. The current literature is split into two kinds of tests: those which are consistent without any assumptions about how the distributions may differ (general alternatives), and those which are designed to specifically test easier alternatives, such as a difference in means (mean-shift alternatives).

In this talk, I will show how to explicitly characterize the power of a popular nonparametric two sample test, designed for general alternatives, under a mean-shift alternative in the high-dimensional setting. Specifically, we explicitly derive the power of the linear-time Maximum Mean Discrepancy statistic using the Gaussian kernel, where the dimension and sample size can both tend to infinity at any rate, and the two distributions differ in their means. As a corollary, we find that if the signal-to-noise ratio is held constant, then the test's power goes to one if the number of samples increases faster than the dimension increases. This is the first explicit power derivation for a general nonparametric test in the high-dimensional setting, and also the first analysis of how tests designed for general alternatives perform when faced with easier ones.
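
For readers who want a concrete handle on the test statistic discussed above, the sketch below computes the linear-time Maximum Mean Discrepancy estimate with a Gaussian kernel on a toy mean-shift example. The power analysis itself is the subject of the talk and is not reproduced here; the bandwidth choice is only a convenient assumption.

# A sketch of the linear-time MMD statistic with a Gaussian kernel.
import numpy as np

def gaussian_kernel(a, b, bandwidth):
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * bandwidth ** 2))

def linear_time_mmd(X, Y, bandwidth=1.0):
    """Linear-time estimate of MMD^2 from equal-sized samples X ~ P and Y ~ Q.

    Points are consumed in disjoint pairs, so each sample is used once and the
    running time is linear in the sample size.
    """
    m = (min(len(X), len(Y)) // 2) * 2
    h = []
    for i in range(0, m, 2):
        x1, x2, y1, y2 = X[i], X[i + 1], Y[i], Y[i + 1]
        h.append(gaussian_kernel(x1, x2, bandwidth) + gaussian_kernel(y1, y2, bandwidth)
                 - gaussian_kernel(x1, y2, bandwidth) - gaussian_kernel(x2, y1, bandwidth))
    return float(np.mean(h))

# Toy mean-shift alternative: identical covariances, means differ in one coordinate.
rng = np.random.default_rng(0)
d = 50
X = rng.normal(size=(500, d))
Y = rng.normal(size=(500, d)); Y[:, 0] += 0.5
print(linear_time_mmd(X, Y, bandwidth=np.sqrt(d)))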

November 20

14:45 -16:15, Joho 2, Research Bldg. No.7 (総合7), Main Campus
Robert Kowalski, Imperial College
Title: Towards a Science of Computing
Abstract: As a scientific discipline, the field of Computing today lacks a unifying framework. It consists, instead, of diverse languages, tools and techniques in the mostly disjoint areas of programming, databases, and artificial intelligence. But, despite this diversity, it is possible to identify a number of similar features lying beneath the surface. These features include such notions as states and state transitions, declarative and procedural representations, external events and internally generated actions, active versus goal-oriented behaviour, and hierarchical organisation of structures and procedures. In my talk, I will highlight some of these similarities, in such different frameworks for Computing as logic programming, production systems, agent-oriented programming, active databases, action languages in AI, abstract state machines and other models of computation. I will argue that it is possible to unify many of the most important features of these frameworks, and to combine them in a single logic-based framework that can be applied to all areas of Computing.

December 1

IRREGULAR SLOT, 16:30 -18:00, Joho 2, Research Bldg. No.7 (総合7), Main Campus
Aleksandar Shurbevski, Kyoto University
Title: A Simple Routing Problem Turned Complex, and an Approximation Framework in Response
Abstract: We approach a routing problem inspired by an application of a robotic arm in a production process. The robotic arm serves as a material handling device, and to optimize its routing we propose a combinatorial optimization model, the Traveling Salesman Problem on bipartite graphs (BTSP). The metric version of the BTSP admits a few simple approximation algorithms. However, extending the model slightly to include biased asymmetry proves much more challenging. Seeking to improve the approximation ratios for the biased case, we obtain a new way of solving the BTSP heuristically. Another line of generalization adds the problem of determining a sequence of material handling tasks for the robotic arm on top of the routing problem within each individual task. Thus we end up with a complex sequencing/routing problem, in which two NP-hard combinatorial optimization problems influence each other and cannot be optimized independently. Relying on approximation approaches for the BTSP, we achieve a decoupling of these problems, and finally obtain an approximation framework for the complex sequencing/routing problem.
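
To make the bipartite structure of the model concrete, the sketch below builds a tour that alternates between two equal-sized point sets using a greedy nearest-neighbour rule. It only illustrates the alternation constraint of such tours; it is not the approximation algorithm presented in the talk, and the points and sizes are made up.

# An illustration of the bipartite tour structure only (naive greedy baseline).
import numpy as np

def alternating_tour(A, B):
    """Greedy alternating tour over two equal-sized planar point sets A and B."""
    assert len(A) == len(B)
    unvisited_a, unvisited_b = set(range(len(A))), set(range(len(B)))
    tour, side = [("A", 0)], "A"
    unvisited_a.remove(0)
    pos = A[0]
    while unvisited_a or unvisited_b:
        side = "B" if side == "A" else "A"     # the tour must switch sides each step
        pts, pool = (B, unvisited_b) if side == "B" else (A, unvisited_a)
        nxt = min(pool, key=lambda k: np.linalg.norm(pts[k] - pos))
        pool.remove(nxt)
        tour.append((side, nxt))
        pos = pts[nxt]
    return tour

rng = np.random.default_rng(1)
A, B = rng.random((6, 2)), rng.random((6, 2))
print(alternating_tour(A, B))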

December 4

14:45 -16:15, Joho 2, Research Bldg. No.7 (総合7), Main Campus
Chen Change Loy, The Chinese University of Hong Kong
Title: Learning a Deep Convolutional Network for Image Super-Resolution
Abstract: In this talk, I am going to introduce a deep learning method for single image super-resolution (SR). The method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, the method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.
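
The mapping described in the abstract can be sketched as a three-layer convolutional network (patch extraction, non-linear mapping, reconstruction). The snippet below uses PyTorch purely for brevity, and the 9-1-5 filter sizes and 64/32 channel counts are the commonly cited configuration, stated here as assumptions rather than taken from the talk.

# A sketch of a three-layer convolutional mapping for single-image super-resolution.
import torch
import torch.nn as nn

class SuperResolutionSketch(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),   # patch extraction
            nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=1),                    # non-linear mapping
            nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),   # reconstruction
        )

    def forward(self, x):
        # x: a bicubically upscaled low-resolution image, shape (N, C, H, W);
        # the network refines it into the high-resolution estimate.
        return self.net(x)

model = SuperResolutionSketch()
low_res = torch.rand(1, 1, 33, 33)
print(model(low_res).shape)     # torch.Size([1, 1, 33, 33])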

December 11

14:45 -16:15, Joho 2, Research Bldg. No.7 (総合7), Main Campus
Rajeev Raman, University of Leicester
Title: Succinct Data Structures and Applications
Abstract: A data structure is a combination of an abstract view of some data, together with an “API” that specifies operations on them. Provided the functionality of the API is supported, the data structure designer is free to devise ways to meet additional non-functional requirements, such as low CPU usage, low memory usage, few accesses to disk, privacy, resilience to errors etc. This talk focusses on Succinct Data Structures (SDS), which store the given data in an information-theoretic minimum amount of space and support operations rapidly. Most research is on static SDS, which take the given data and preprocess them to answer queries, but do not support updates (although there is an increasing amount of work on dynamic SDS that support update operations). The information-theoretic minimum space for storing many kinds of data is significantly smaller than the “normal” way of storing such data. Asymptotically, space savings by a factor of Ω(log n) are possible, and in practice the space savings are usually 1-2 orders of magnitude. Operations are usually supported in O(1) time on the RAM model of computation. In practice, succinct data structures are usually competitive with “normal” data structures even when both of them fit into main memory. Of course, when the data is too large for the “normal” data structure to fit into memory, the SDS is much, much faster. This talk will introduce SDS and some libraries. SDS allow “big” data to be processed using existing algorithms (and code) and are therefore finding many uses these days. If time permits, I will cover some such applications, particularly in data mining and information retrieval.

Slides for this lecture are available at KULASIS.
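
To illustrate the flavour of succinct data structures, the sketch below supports rank queries on a bit string using small precomputed block counts. A Python list is of course not succinct; the point is only the idea of trading a little extra precomputed information for fast queries. Real implementations add a second level of counts and word-level popcounts to make each query constant time.

# A simplified illustration of the rank primitive used in succinct data structures.

class RankBitVector:
    def __init__(self, bits, block=64):
        self.bits = bits
        self.block = block
        self.block_ranks = [0]                 # number of 1s before each block
        for start in range(0, len(bits), block):
            self.block_ranks.append(self.block_ranks[-1] + sum(bits[start:start + block]))

    def rank(self, i):
        """Number of 1 bits in bits[0:i]: one table lookup plus a short in-block scan."""
        b, r = divmod(i, self.block)
        return self.block_ranks[b] + sum(self.bits[b * self.block:b * self.block + r])

bv = RankBitVector([1, 0, 1, 1, 0, 0, 1] * 100)
print(bv.rank(10))    # 1s among the first 10 bits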

December 11

IRREGULAR SLOT, 16:30 -18:00, Joho 2, Research Bldg. No.7 (総合7), Main Campus
Adnan Darwiche, UCLA
Title: The Same-Decision Probability: Theory and Applications
Abstract: The Same-Decision Probability (SDP) is a newly introduced notion for quantifying the stability of decisions in probabilistic graphical models. Intuitively, the SDP is the probability that a current decision would stay the same, had we observed further information. The SDP is difficult to compute, being PP^PP-complete for Bayesian networks. In this talk, I will introduce the SDP and an algorithm for computing it, in addition to a number of its applications. The presented algorithm is particularly important, as it can be used to compute other PP^PP-complete problems (such as expectations), and is the first exact, practical algorithm we are aware of for this complexity class. As for applications, I will show how the SDP can be used to define a new notion of value of information (VOI), which can be used in the context of budget-limited classification and reliable decision making. I will also report on an ongoing application to medical informatics, where the SDP is being pursued in the context of treating stroke patients.
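
To unpack the definition given above, the toy sketch below computes the SDP by brute force on a small, hypothetical joint distribution over a hypothesis variable H, an observed variable E and a not-yet-observed variable X. It only spells out the definition and is not the algorithm presented in the talk.

# A brute-force toy illustration of the Same-Decision Probability.
import numpy as np

rng = np.random.default_rng(2)
joint = rng.random((2, 2, 2))     # joint[h, e, x] over three binary variables
joint /= joint.sum()

def p_h_given(e=None, x=None):
    """P(H = 1 | observations), by summing out the unobserved variables."""
    table = joint
    if e is not None:
        table = table[:, e:e + 1, :]
    if x is not None:
        table = table[:, :, x:x + 1]
    return table[1].sum() / table.sum()

threshold, e_obs = 0.5, 1
current_decision = p_h_given(e=e_obs) >= threshold

# SDP: the probability, over the unseen X given the current evidence, that the
# thresholded decision would stay the same once X is also observed.
sdp = 0.0
for x in (0, 1):
    p_x_given_e = joint[:, e_obs, x].sum() / joint[:, e_obs, :].sum()
    same = (p_h_given(e=e_obs, x=x) >= threshold) == current_decision
    sdp += p_x_given_e * same
print(current_decision, sdp)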

December 18

14:45 -16:15, Joho 2, Research Bldg. No.7 (総合7), Main Campus
Hiroshi Imai, University of Tokyo
Title: Matchings, Transversals and Their Extensions to Polymatroids Related to Graphs
Abstract: Linear optimization over the base polytope of certain matroids, such as graphic and transversal matroids and their duals, is a well-known polynomial-time solvable problem. From the viewpoint of extension complexity, one might be inclined to conjecture that the extension complexity of the base polytope is polynomial, but Rothvoss (2012) disproved this conjecture by a counting argument. In this talk we try to enlarge the class of matroids whose extension complexity is polynomial. Specifically, we show that the extension complexity of a lower-truncated transversal polymatroid, studied by the speaker in 1982, is polynomial. This class includes sparsity matroids, whose extension complexity was shown to be polynomial by Iwata, Kamiyama, Katoh, Kijima and Okamoto (2013), with the same bound for a small parameter and a slightly better bound for a larger parameter. The talk is planned for a general audience in computer science.
Abstract: Linear optimization over the base polytope of a certain matroid, say graphic, transversal and their duals, is a well-known polynomial-time solvable problem. From the viewpoint of extension complexity, one might be inclined to conjecture that the extension complexity of the base polytope is polynomial, but Rothvoss (2012) disproved this conjecture by a counting argument. In this talk we try to generalize a class of matroids whose extension complexity is polynomial. Specifically, we show that the extension complexity of a lower-truncated transversal polymatroid, studied by the speaker in 1982, is polynomial. This class includes sparsity matroids whose extension complexity is shown to be polynomial by Iwata, Kamiyama, Katoh, Kijima and Okamoto (2013), with the same bound for a small parameter and a slightly better bound for a larger parameter. The talk is planned for general audience in computer science.