The University of York
Edwin R. Hancock holds a BSc degree in physics (1977), a PhD degree in high-energy physics (1981) and a D.Sc. degree (2008) from the University of
Durham. From 1981 to 1991 he worked as a researcher in the fields of high-energy nuclear physics and pattern recognition at the Rutherford Appleton Laboratory (now the
Central Laboratory of the Research Councils). During this period, he also held adjunct teaching posts at the University of Surrey and the Open University.
In 1991, he moved to the University of York as a lecturer in the Department of Computer Science, where he has held a chair in Computer Vision since 1998. He leads a
group of some 25 faculty, research staff, and PhD students working in the areas of computer vision and pattern recognition. His main research interests are in the use of
optimization and probabilistic methods for high- and intermediate-level vision. He is also interested in the methodology of structural and statistical pattern recognition.
He is currently working on graph matching, shape-from-X, image databases, and statistical learning theory. His work has found applications in areas such as radar terrain
analysis, seismic section analysis, remote sensing, and medical imaging. He has published about 135 journal papers and 500 refereed conference publications. He was awarded the
Pattern Recognition Society medal in 1991 and an outstanding paper award in 1997 by the journal Pattern Recognition. He has also received best paper prizes at CAIP 2001,
ACCV 2002, ICPR 2006 and BMVC 2007. In 2009 he was awarded a Royal Society Wolfson Research Merit Award. In 1998, he became a fellow of the International Association for
Pattern Recognition. He is also a fellow of the Institute of Physics, the Institute of Engineering and Technology, and the British Computer Society. He has been a member of
the editorial boards of the journals IEEE Transactions on Pattern Analysis and Machine Intelligence, Pattern Recognition, Computer Vision and Image Understanding, and Image
and Vision Computing. In 2006, he was appointed as the founding editor-in-chief of the IET Computer Vision Journal. He was conference chair for BMVC 1994, track chair for
ICPR 2004, and area chair at ECCV 2006 and CVPR 2008, and in 1997 he established the EMMCVPR workshop series.
Gauging Network Structure and Complexity
The study of graph and network complexity is a topic of current interest in several areas of computer science, as well as in interdisciplinary fields such as complexity science. In this talk I will review several measures derived from the study of random walks on graphs, the zeta function of a graph, and the von Neumann entropy. These can be used as measures of graph structure and complexity for the purposes of classifying and clustering graphs. I will also describe measures derived from quantum walks on graphs and show how these reveal aspects of graph structure (such as symmetry) not revealed by the classical methods.
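As a concrete illustration of entropy-based complexity measures of the kind surveyed in this talk, the sketch below computes two simple quantities for an undirected graph: the Shannon entropy of the stationary distribution of a random walk (which is proportional to node degree), and a degree-based quadratic approximation to the von Neumann entropy, H ≈ 1 - 1/N - (1/N²) Σ_{(u,v)∈E} 1/(d_u d_v), which follows an approximation proposed in the graph-entropy literature. The function names and example graphs are my own, not taken from the talk.

```python
import math

def degrees(n, edges):
    """Degree of each of the n nodes of an undirected graph."""
    d = [0] * n
    for u, v in edges:
        d[u] += 1
        d[v] += 1
    return d

def walk_entropy(n, edges):
    """Shannon entropy of the stationary distribution of a random walk.
    For a connected undirected graph this distribution is d_i / 2|E|."""
    d = degrees(n, edges)
    two_m = sum(d)
    return -sum((di / two_m) * math.log(di / two_m) for di in d if di > 0)

def vn_entropy_approx(n, edges):
    """Quadratic approximation to the von Neumann entropy of a graph:
    1 - 1/N - (1/N^2) * sum over edges of 1/(d_u * d_v)."""
    d = degrees(n, edges)
    s = sum(1.0 / (d[u] * d[v]) for u, v in edges)
    return 1.0 - 1.0 / n - s / (n * n)

# A 4-node star and the complete graph K4: the regular K4 scores higher
# on both measures than the degree-heterogeneous star.
star = [(0, 1), (0, 2), (0, 3)]
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(walk_entropy(4, star), walk_entropy(4, k4))
print(vn_entropy_approx(4, star), vn_entropy_approx(4, k4))
```

Both measures separate regular from star-like structure using only the degree sequence, which is what makes them cheap enough to use as features for classifying and clustering large graph sets.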
Karlsruhe Institute of Technology
Tamim Asfour received his diploma degree in Electrical Engineering (Dipl.-Ing.) in 1994 and his PhD in Computer Science (Dr.-Ing.) in 2003 from the
University of Karlsruhe. In 2003 he was awarded the Research Center for Information Technology (FZI) prize for his outstanding PhD thesis on sensorimotor control in
humanoid robotics and the development of the humanoid robot ARMAR. He is a senior research scientist and leader of the Humanoid Research Group at the Humanoids and Intelligence
Systems Lab, Institute for Anthropomatics, Karlsruhe Institute of Technology (KIT). His major research interest is humanoid robotics. In particular, his research topics include
action learning from human observation, goal-directed imitation learning, dexterous grasping and manipulation, active vision and active touch, whole-body motion planning,
cognitive control architectures, system integration, robot software and hardware control architecture, motor control and mechatronics. He is leading the system integration tasks
and the development team of the humanoid robot series ARMAR in the German Humanoid Robotics Project (SFB 588) funded by the German Research Foundation (DFG). He is currently
involved in the following projects funded by the European Commission: PACO-PLUS, GRASP and Xperience. Tamim Asfour is a member of the Editorial Board of IEEE Transactions on
Robotics and European Chair of the IEEE-RAS Technical Committee on Humanoid Robots. He is a member of the Executive Board of the German Association of Robotics (DGR). He serves
as a member of several program committees and review panels. Since September 2010 he has held an Adjunct Professor position at the Georgia Institute of Technology (Georgia Tech),
College of Computing, Interactive Computing.
Active perception for grasping and imitation strategies on humanoid robots
Humanoid robots that are to learn to operate in the real world and to interact and communicate with humans must model and reflectively reason about their perceptions and actions in order to learn, act, predict and react appropriately. Such capabilities can only be attained through physical interaction with and exploration of the real world, and they require the simultaneous consideration of perception and action. Representations built from such interactions are much better adapted to guiding behaviour than human-crafted rules, and they allow situated and embodied systems, such as humanoid robots in human-centered environments, to gradually extend their cognitive horizon. To achieve this goal I am building humanoid robots with complex and rich sensorimotor capabilities as the most suitable experimental platform for studying cognitive information processing. In this talk, I will present recent progress towards building autonomous humanoid robots able to act, interact in and autonomously acquire knowledge in the real world. The talk will discuss current progress towards the implementation of integrated humanoid robots able to 1) perform complex grasping and manipulation tasks in a kitchen environment, 2) autonomously acquire object knowledge through active visual and haptic exploration, and 3) learn actions from human observation and imitate them in a goal-directed manner. The developed capabilities will be demonstrated on the humanoid robots ARMAR-IIIa and ARMAR-IIIb.
University of Ljubljana
Ales Leonardis is a full professor and the head of the Visual Cognitive Systems Laboratory with the Faculty of Computer and Information Science,
University of Ljubljana. He is also an adjunct professor at the Faculty of Computer Science, Graz University of Technology. From 1988 to 1991, he was a visiting researcher
in the General Robotics and Active Sensory Perception Laboratory at the University of Pennsylvania. From 1995 to 1997, he was a postdoctoral associate at the PRIP,
Vienna University of Technology. He was also a visiting researcher and a visiting professor at the Swiss Federal Institute of Technology ETH in Zurich and at the Technische
Fakultaet der Friedrich-Alexander-Universitaet in Erlangen, respectively. His research interests include robust and adaptive methods for computer vision, object and scene
recognition and categorization, statistical visual learning, 3D object modeling, and biologically motivated vision. He is an author or coauthor of more than 160 papers
published in journals and conferences and he coauthored the book Segmentation and Recovery of Superquadrics (Kluwer, 2000). He is an Editorial Board Member of Pattern
Recognition, an Editor of the Springer Book Series Computational Imaging and Vision, and an Associate Editor of the IEEE Transactions on Pattern Analysis and Machine
Intelligence. He has served on the program committees of major computer vision and pattern recognition conferences. He was also a program co-chair of the European Conference
on Computer Vision, ECCV 2006. He has received several awards. In 2002, he coauthored a paper, 'Multiple Eigenspaces,' which won the 29th Annual Pattern Recognition Society
award. In 2004, he received a prestigious national award for scientific achievements. He is a fellow of the IAPR and a member of the IEEE and the IEEE Computer Society.
Combining compositional shape hierarchy and multi-class object taxonomy for efficient object categorisation
Visual categorisation has been an area of intensive research in the vision community for several decades. Ultimately, the goal is to efficiently detect and recognize an increasing number of object classes. The problem entangles three highly interconnected issues: the internal object representation, which should compactly capture the visual variability of objects and generalize well over each class; a means for learning the representation from a set of input images with as little supervision as possible; and an effective inference algorithm that robustly matches the object representation against the image and scales favorably with the number of objects. In this talk I will present our novel approach, which combines a learned compositional hierarchy, representing (2D) shapes of multiple object classes, with a coarse-to-fine matching scheme that exploits a taxonomy of objects to perform efficient object detection. Our framework for learning a hierarchical compositional shape vocabulary for representing multiple object classes takes simple contour fragments and learns their frequent spatial configurations. These are recursively combined into increasingly more complex and class-specific shape compositions, each exhibiting a high degree of shape variability. At the top level of the vocabulary, the compositions represent the whole shapes of the objects. The vocabulary is learned layer after layer, by gradually increasing the size of the window of analysis and reducing the spatial resolution at which the shape configurations are learned. The lower layers are learned jointly on images of all classes, whereas the higher layers of the vocabulary are learned incrementally, by presenting the algorithm with one object class after another. However, in order for recognition systems to scale to a larger number of object categories, and achieve running times logarithmic in the number of classes, building visual class taxonomies becomes necessary.
We propose an approach for speeding up the recognition times of multi-class part-based object representations. The main idea is to construct a taxonomy of constellation models, cascaded from coarse to fine resolution, and to use it in recognition with an efficient search strategy. The structure and depth of the taxonomy are built automatically in a way that minimizes the number of expected computations during recognition by optimizing the cost-to-power ratio. The combination of the learned taxonomy with the compositional hierarchy of object shape achieves efficiency both with respect to the representation of the structure of objects and in terms of the number of modeled object classes. The experimental results show that the learned multi-class object representation achieves a detection performance comparable to current state-of-the-art flat approaches, with both faster inference and shorter training times.
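The coarse-to-fine descent of a class taxonomy described above can be sketched in miniature. This is my own toy illustration, not the authors' constellation-model implementation: the taxonomy, class names, threshold and scores are invented, and in a real system the scores would come from matching shape models against an image.

```python
# A toy class taxonomy: each internal node carries a coarse score that is
# assumed to upper-bound the scores of the classes beneath it, so whole
# subtrees can be pruned without evaluating their (more expensive) fine models.
TAXONOMY = ("root", [
    ("quadrupeds", [("horse", []), ("cow", []), ("dog", [])]),
    ("vehicles",   [("car", []), ("bicycle", [])]),
])

def coarse_to_fine(node, score, threshold, evaluated):
    """Return detected leaf classes, descending only promising branches."""
    name, children = node
    evaluated.append(name)            # record every model evaluation
    s = score(name)
    if s < threshold:
        return []                     # prune the whole subtree
    if not children:
        return [(name, s)]            # a leaf class fires
    hits = []
    for child in children:
        hits += coarse_to_fine(child, score, threshold, evaluated)
    return hits

# Hypothetical per-model scores for one image containing a horse.
SCORES = {"root": 0.9, "quadrupeds": 0.8, "vehicles": 0.1,
          "horse": 0.85, "cow": 0.2, "dog": 0.15,
          "car": 0.0, "bicycle": 0.0}

evaluated = []
hits = coarse_to_fine(TAXONOMY, SCORES.get, threshold=0.5, evaluated=evaluated)
print(hits)       # the horse is detected
print(evaluated)  # the car/bicycle fine models were never evaluated
```

Because the "vehicles" branch is rejected at its coarse node, its leaf models are never touched; with a balanced taxonomy this is what yields recognition times logarithmic, rather than linear, in the number of classes.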
Universidade Tecnica de Lisboa
Ruben Martinez-Cantin is an associate professor at the new Centro Universitario de la Defensa, attached to the University of Zaragoza. He is
also an associate researcher at the Instituto Superior Tecnico (IST), in Lisbon and also at the Instituto de Investigacion en Ingenieria de Aragon (I3A) in Zaragoza. Before
that, he was a postdoctoral researcher at the Institute for Systems and Robotics at the Instituto Superior Tecnico in Lisbon, where he remains an active collaborator. He received his
PhD and MSc in Computer Science and Electrical Engineering from the University of Zaragoza in 2008 and 2003, respectively. His research interests include the application of
Bayesian inference and reasoning methods in machine learning, robotics, computer vision and cognitive models.
Optimal actions for better understanding: the non-convex paradigm for learning
Active learning and sequential experimental design provide a common framework between statistical learning (understanding) and decision making (actions). Traditionally, both fields have been characterized by a reliance on convex functions, which allow high-dimensional problems to be solved efficiently. However, many problems in robotics and perception cannot be directly formulated in terms of convex functions without relying on heavy approximations, such as discretization, or without taking suboptimal (local) decisions. In this work we show how some of those problems can be reformulated so that non-convex (global) optimization methods can be applied efficiently.
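To make the convex/non-convex contrast concrete: on a convex objective, following the gradient suffices, but a multimodal objective traps such local methods in the nearest basin. The sketch below is my own illustration, not the method of the talk: it minimizes an invented one-dimensional multimodal function with simulated annealing, one simple global (non-convex) optimization strategy, with all parameters chosen arbitrarily for the example.

```python
import math
import random

def objective(x):
    """A multimodal test function: many local minima, global minimum at x = 0."""
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def simulated_annealing(f, x0, lo, hi, iters=20000, t0=10.0, seed=0):
    """Minimize f on [lo, hi] by randomly perturbing x and accepting
    uphill moves with a probability that shrinks as the temperature cools."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(iters):
        t = t0 * (1.0 - k / iters) + 1e-9               # linear cooling schedule
        cand = min(hi, max(lo, x + rng.gauss(0.0, 0.5)))
        fc = f(cand)
        # Always accept improvements; accept worse moves with prob e^(-delta/t),
        # which lets the search escape local minima while the temperature is high.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

# A local descent started at x0 = 4.2 would stall in the basin near x = 4;
# the annealed search can cross basins and approach the global minimum at 0.
x_star, f_star = simulated_annealing(objective, x0=4.2, lo=-5.0, hi=5.0)
print(x_star, f_star)
```

The point of the example is the reformulation cost: no discretization of the domain is needed, and the price paid for global behaviour is stochastic search rather than a convexity assumption.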
Fernando Fernandez Rebollo
Universidad Carlos III de Madrid
Fernando Fernandez has been a faculty member of the Computer Science Department of Universidad Carlos III de Madrid (UC3M) since October 2005. He received his PhD degree
in Computer Science from UC3M in 2003, and his BSc, also in Computer Science, from UC3M in 1999. Since 2001 he has held positions as assistant and
associate professor at UC3M. In the fall of 2000, Fernando was a visiting student at the Center for Engineering Science Advanced Research at Oak Ridge National Laboratory
(Tennessee). He was also a postdoctoral fellow at the Computer Science Department of Carnegie Mellon University from October 2004 until December 2005. He is the recipient of a
pre-doctoral FPU fellowship award from the Spanish Ministry of Education (MEC), a Doctoral Prize from UC3M, and a MEC-Fulbright postdoctoral fellowship. He has more than 30 journal
and conference papers, mainly in the field of machine learning and planning. He is interested in intelligent systems that operate in continuous and stochastic domains. In his
PhD thesis, he studied different discretization methods of the state space in reinforcement learning problems. When he arrived at CMU, he focused his research on the transfer
of policies between different reinforcement learning tasks. Currently, his research also focuses on connecting classical planning methods with learning mechanisms that make it
possible to act in dynamic and stochastic environments. Applications of his research include robot soccer, adaptive educational systems, tourism support tools and business applications.
Architectures for the Integration of Automated Planning and Machine Learning in Autonomous Robots
Automated Planning (AP) makes it possible to generate high-level plans from a domain and problem description. The problem includes the initial state and the goal of the system; the domain defines the operators that transform one state into another. Although many improvements have been achieved in this area, the application of automated planning in real systems, such as autonomous robots, is still a challenge, mainly due to three factors. The first is the gap between the high-level descriptions of the world used by AP systems and the low-level information managed by the sensors and actuators of a robot. The second challenge is how to deal with the uncertainty of the world, and how this uncertainty can be modeled within AP. Last, adaptation to new situations requires learning approaches that allow the system to continuously accommodate changes in the environment. In this talk, PELEA (Planning, Execution and LEarning Architecture) is described, showing how Machine Learning techniques, together with monitoring, execution and re-planning schemes, can be integrated with AP to address these challenges. Some examples from the robotics area are provided during the talk, as well as some other learning and Human-Robot Interaction schemes oriented towards the acquisition of knowledge and the improvement of robot performance.
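The domain/problem split described above can be illustrated with a minimal STRIPS-style planner. The sketch below is an invented toy example, not part of PELEA itself: the domain is a set of operators with preconditions, add lists and delete lists; the problem is an initial state plus a goal; and a breadth-first search returns the first operator sequence that reaches the goal.

```python
from collections import deque

# The domain: each operator is (preconditions, delete list, add list).
# It is applicable when its preconditions hold; applying it removes the
# delete-list facts and adds the add-list facts.
OPS = {
    "move(a,b)":   ({"at(a)"}, {"at(a)"}, {"at(b)"}),
    "move(b,a)":   ({"at(b)"}, {"at(b)"}, {"at(a)"}),
    "pick(cup,b)": ({"at(b)", "cup-at(b)"}, {"cup-at(b)"}, {"holding(cup)"}),
}

def plan(initial, goal):
    """Breadth-first search over states; returns a shortest operator sequence."""
    start = frozenset(initial)
    goal = frozenset(goal)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                       # all goal facts hold
            return steps
        for name, (pre, delete, add) in OPS.items():
            if pre <= state:                    # preconditions satisfied
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                                 # no plan exists

# The problem: a robot at location a must fetch a cup from b and return.
result = plan({"at(a)", "cup-at(b)"}, {"holding(cup)", "at(a)"})
print(result)
```

The gap the talk addresses is visible even here: facts like "at(a)" must be produced from noisy sensor data and the operators' effects are assumed deterministic, which is exactly where monitoring, re-planning and learning come in.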
Universidad de Alicante
Francisco Escolano received his
Bachelor's degree in Computer Science from the Polytechnic
University of Valencia (Spain) in 1992 and his PhD degree in
Computer Science from the University of Alicante in 1997.
Since 1998 he has been an Associate Professor with the Department
of Computer Science and Artificial Intelligence of the
University of Alicante. He was a post-doctoral fellow
with Dr. Norberto M. Grzywacz at the Biomedical Engineering
Department of the University of Southern California in Los
Angeles, and he has also collaborated with Dr. Alan L.
Yuille at the Smith-Kettlewell Eye Research Institute of San
Francisco. Recently, he visited Liisa Holm's
Bioinformatics Lab at the University of Helsinki. His
research interests are focused on the development of
efficient and reliable computer-vision algorithms for
biomedical applications (tracking of intravascular sequences),
active vision and robotics (mid-level geometric structures
obtained through junction grouping, stereo and appearance
based methods for the localization of mobile robots, SLAM),
and video-based surveillance (motion detection and object
tracking). He is also interested in the coupling between
computer and biological vision. He is the head of the Robot