Université Pierre et Marie Curie, Colloquium d'informatique
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012
A prestige seminar series in computer science research, inviting leading names of the field and aimed at a broad audience.
http://vodcast.cpm.upmc.fr/images/Co12_ColloquiumLip6.jpg

What Makes Digital Inclusion Good Or Bad?
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=26
There are many threats to freedom in the digital society. They include massive surveillance, censorship, digital handcuffs, nonfree software that controls users, and the War on Sharing. Computers for voting make election results untrustworthy. Other threats come from the use of web services. Finally, we have no assured right to make any particular use of the Internet; every activity is precarious, permitted only as long as companies are willing to cooperate with our doing it.
DOWNLOAD VIDEO: http://video.upmc.fr/richard_stallman.ogv
Tue, 11 Oct 2016 00:00:00 +0200 (S_C_colloquium_lip6_2012_26)

Abstract interpretation
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=25
The complexity of large programs grows faster than the intellectual ability of the programmers in charge of their development and maintenance. The direct consequence is a large number of errors and bugs in programs, mostly debugged by their end users. Programmers are not responsible for these bugs: they are not required to produce provably safe and secure programs, because professionals are only required to apply state-of-the-art techniques, that is, testing on finitely many cases. This state of the art is changing rapidly, and so will this irresponsibility, as in other manufacturing disciplines. Scalable and cost-effective tools have appeared recently that can avoid bugs with potentially dramatic consequences, for example in transportation, banking, or the privacy of social networks. Entirely automatic, they are able to capture all bugs involving the violation of software healthiness rules, such as the use of operations on arguments for which they are undefined. These tools are formally founded on abstract interpretation. They are based on a definition of the semantics of programming languages specifying all possible executions of the programs of a language. Program properties of interest are abstractions of these semantics, abstracting away all aspects not relevant to a particular reasoning on programs. This yields proof methods. Full automation is harder because of undecidability: programs cannot always prove programs correct in finite time and memory. Further abstractions are therefore necessary for automation, and they introduce imprecision: bugs may be signalled that are impossible in any execution (but none is ever missed). This has an economic cost, much lower than that of testing, and the best static analysis tools are able to reduce these false alarms to almost zero. Without such tools, this verification task is time-consuming, error-prone, and too difficult, if not impossible, for programmers.
Thu, 29 Sep 2016 00:00:00 +0200 (S_C_colloquium_lip6_2012_25)
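To give a concrete flavour of the abstraction idea described above, here is a minimal interval-domain sketch in Python; the toy Interval class, the alarm message and the example program are illustrative assumptions, not the industrial analysers referred to in the talk. Each variable is over-approximated by an interval covering every value it can take at run time, so a division whose divisor interval excludes zero is proved safe, while an interval containing zero triggers a (possibly false) alarm.

```python
# Minimal interval-domain abstract interpreter (illustrative sketch only).
# Every concrete integer value is abstracted by an interval [lo, hi]; the
# analysis soundly over-approximates all run-time values, so it may raise a
# false alarm but never misses a real division by zero.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

    def may_contain_zero(self):
        return self.lo <= 0 <= self.hi

def abstract_div(num, den):
    """Signal a possible runtime error if the abstract divisor contains 0."""
    if den.may_contain_zero():
        print(f"alarm: possible division by zero, divisor in {den}")
    # keep the analysis going with a deliberately coarse result
    return Interval(float("-inf"), float("inf"))

# x is read from input, known only to lie in [0, 10]
x = Interval(0, 10)
y = x + Interval(1, 1)     # y in [1, 11]: provably non-zero, no alarm
abstract_div(Interval(1, 1), y)
z = x + Interval(-5, -5)   # z in [-5, 5]: may be zero, alarm raised
abstract_div(Interval(1, 1), z)
```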
"Chirps" everywhere
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=24
Chirps, i.e., transient waveforms that are modulated in both amplitude and frequency, are ubiquitous in Nature. Birds are chirping, bats and whales too, and even coalescing black holes are, as recently evidenced by the first direct detection of gravitational waves. Chirps can also be found in some mathematical functions (such as Riemann's) and they are extensively used in engineering (active radar and sonar, non-destructive evaluation). Because of their highly nonstationary structure, chirps call for description tools that go beyond Fourier, with "time-frequency" as a natural paradigm for their analysis and processing. The basics of such approaches will first be recalled and complemented by recent advances aimed at adaptively disentangling multicomponent chirp signals into coherent constitutive waveforms. Two case studies will then be detailed, dealing respectively with the bat biosonar system and the interferometric detection of gravitational waves.
Mon, 04 Jul 2016 00:00:00 +0200 (S_C_colloquium_lip6_2012_24)
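As a small illustration of what "modulated in both amplitude and frequency" means, the following Python/NumPy sketch synthesizes a decaying linear chirp and inspects one short analysis window; the sampling rate, sweep range and envelope are made-up values, not parameters from the talk.

```python
# A linear chirp x(t) = a(t) * cos(2*pi*phi(t)) whose instantaneous frequency
# sweeps from f0 to f1: both the amplitude and the frequency vary over time.
import numpy as np

fs, T = 8000, 1.0                      # sampling rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
f0, f1 = 100.0, 1000.0                 # start and end frequencies (Hz)
phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t**2)
envelope = np.exp(-3 * t)              # transient, decaying amplitude
x = envelope * np.cos(phase)

# A Fourier transform of the whole signal hides the frequency sweep; a
# short-time (time-frequency) analysis reveals it window by window.
window = 256                           # 32 ms at this sampling rate
frame = x[:window] * np.hanning(window)
spectrum = np.abs(np.fft.rfft(frame))
print("dominant frequency in the first 32 ms:",
      np.argmax(spectrum) * fs / window, "Hz")
```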
Robots that exceed human capabilities
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=23
The design of humanoids is driven by our admiration for humans' motor and cognitive capabilities, which guides the design of similar competences in robots. While humanoid robots are far from reproducing the complexity of human cognition, for some skills robots do better than humans. Thanks to their powerful motors and the computational speed of their computer-driven circuits, robots can beat humans in precision and reactivity. This talk will give an overview of several approaches developed at LASA to endow robots with the ability to react extremely rapidly to unexpected changes in the environment. We use human demonstrations to guide the design of the controller's parameters, to modulate compliance and to determine the range of feasible paths. A review of these algorithms will be accompanied by illustrations of their implementation for controlling uni-manual and bi-manual manipulation. I will conclude by showing some examples of super-human capabilities for catching objects with a dexterity that exceeds that of human beings.
Tue, 19 Apr 2016 00:00:00 +0200 (S_C_colloquium_lip6_2012_23)

Really Big Data Analytics on Graphs with Trillions of Edges
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=22
Big graphs occur naturally in many applications, most obviously in social networks, but also in many other areas such as biology and forensics. Current approaches to processing large graphs use either supercomputers or very large clusters. In both cases the entire graph must reside in memory before it can be processed. We are pursuing an alternative approach, processing graphs from secondary storage. While this comes with a performance penalty, it makes analytics on very large graphs feasible on a small number of commodity machines. We have developed two systems, one for a single machine and one for a cluster of machines. X-Stream, the single-machine solution, aims to make all secondary storage access sequential. It uses two techniques to achieve this goal: edge-centric processing and streaming partitions. Chaos, the cluster solution, starts from the observation that there is little benefit to locality when accessing data from secondary storage over a high-speed network. As a result, Chaos spreads graph data uniformly at random over storage devices and uses randomized access to achieve I/O balance. Chaos furthermore uses work stealing to achieve computational load balance. By using these techniques, it avoids the need for expensive partitioning during pre-processing, while still achieving good scaling behavior. With Chaos we have been able to process an 8-trillion-edge graph on 32 machines, a new milestone for graph size on a small cluster. I will describe both systems and their performance on a number of benchmarks, in comparison to state-of-the-art alternatives. This is joint work with Laurent Bindschaedler (EPFL), Jasmina Malicevic (EPFL) and Amitabha Roy (Intel Labs).
Tue, 29 Mar 2016 00:00:00 +0200 (S_C_colloquium_lip6_2012_22)
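A minimal sketch of the edge-centric idea, in Python: rather than following adjacency lists vertex by vertex (random access), the computation streams sequentially over an unordered edge list and updates a small per-vertex state, here computing connected components by label propagation. The names and the tiny in-memory edge list are illustrative assumptions; X-Stream's actual engine streams edges from disk and adds streaming partitions on top of this pattern.

```python
# Edge-centric computation in the spirit of X-Stream (illustrative sketch,
# not X-Stream's API): stream over the edge list sequentially, scatter
# updates into a per-vertex state array, repeat until nothing changes.

edges = [(0, 1), (1, 2), (3, 4), (2, 0), (4, 5)]   # could live on disk, read sequentially
num_vertices = 6
label = list(range(num_vertices))                  # per-vertex state kept in memory

changed = True
while changed:                                     # one sequential pass per iteration
    changed = False
    for u, v in edges:                             # scatter phase: stream every edge
        low = min(label[u], label[v])
        if label[u] != low or label[v] != low:
            label[u] = label[v] = low
            changed = True

print(label)    # [0, 0, 0, 3, 3, 3]: two connected components
```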
Cybersecurity and network measurement: problematic in so many ways
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=21
In this talk, I will discuss the challenges we face when the goal is to find anomalies in the Internet. The challenges are statistical, methodological and ethical.

Firstly, today's Internet is vastly more complex (not just larger) than its original designers expected. The number of applications runs into the millions. Systems increasingly involve machine-to-machine communication as well as human-to-human communication or simple information retrieval. The dimensionality of the system is thus massive, and the definition of an "anomaly" is itself non-trivial. Secondly, the scale requires us to limit measurement to sampling, and yet, as we have just observed, we have dynamic, high-dimensional data. What to sample, where and when, is increasingly non-obvious. This leads to the third strand of the talk: the most successful measurement methods we have found in the last decade also represent a fairly extensive invasion of privacy, and are subject both to ethical and to legal constraints.

And yet the Internet is a critical resource on which society depends for social, economic and practical reasons. Just as other utilities (water, electricity) require careful protection, the Internet needs care and attention. The old Jon Postel principle, "be conservative in what you send, and liberal in what you accept", is a fine guiding design principle, but unfortunately there are many intentional and unintentional behaviours in today's net which are a long way from that ideal, and understanding (and possibly repairing) them seems ever more important.

The talk is based on work in the EU Marie-Curie METRICS project, an international training network for PhDs that has been looking at requirements for network measurement and the sharing of data, and that held a summer school on cybersecurity in Estonia in 2015.
Thu, 21 Jan 2016 00:00:00 +0100 (S_C_colloquium_lip6_2012_21)

Les informaticiennes, de la dominance de classe aux discriminations de sexe
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=20
Women are currently a minority in the discipline of computer science. They are nevertheless numerous enough that they cannot be regarded as exceptions. Embracing the ideology of republican universalism, which promises them equality between the sexes, women computer scientists do not see themselves as pioneers. This is all the more true since the history of the discipline shows that their representation has fallen sharply since the 1970s. Often coming from well-off social classes and ranking high at school, they consider themselves entirely in their place in computer science. Moreover, the gendered representations of the discipline allow some latitude of interpretation: computer science falls outside the classical image of a traditionally masculine profession.

At school, they can overlook sexism as long as their results are good enough. But once confronted with strong professional competition, they are unable to decode the signals of gendered social relations, and they respond with self-blame to a system that discriminates against them.
Mon, 07 Dec 2015 00:00:00 +0100 (S_C_colloquium_lip6_2012_20)

Desperately seeking software perfection
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=19
In the general public, "software" has become synonymous with "bugs" and "security holes". Yet there exist life-critical software systems that achieve extraordinary levels of reliability, as I will illustrate with fly-by-wire systems in airplanes. A recent development in this area is the introduction of tool-assisted formal verification (static analysis and program proof) to complement, and sometimes replace, traditional test-based verification. However, the assurance provided by formal verification is limited by the confidence we can have in the verification tools and in the compilers that produce actual executables from verified sources. Using the CompCert verified C compiler as an example, I will show the effectiveness of formally verifying, with the help of proof assistants, the tools that participate in the construction and verification of critical software.
Thu, 12 Nov 2015 10:00:00 +0100 (S_C_colloquium_lip6_2012_19)

Proofs, Secrets, and Computation
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=18
We show how the Theory of Computation has revolutionized our millenary notion of a proof, revealing its unexpected applications to our new digital world. In particular, we shall demonstrate how interaction can make proofs much easier to verify, dramatically limit the amount of knowledge released, and yield the most secure identification schemes to date.

Silvio Micali received his Laurea in Mathematics from the University of Rome and his PhD in Computer Science from the University of California at Berkeley. Since 1983 he has been a member of the MIT faculty, in the Electrical Engineering and Computer Science Department, where he is currently the Associate Department Head.

Silvio's research interests are cryptography, zero knowledge, pseudo-random generation, secure protocols, and mechanism design.

Silvio has received the Turing Award, the Gödel Prize, and the RSA Prize. He is a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences.
Fri, 29 May 2015 00:00:00 +0200 (S_C_colloquium_lip6_2012_18)
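To illustrate how interaction can yield identification schemes that release almost no knowledge, here is one round of a Fiat-Shamir-style proof of knowledge of a modular square root, sketched in Python; it is a textbook toy with assumed tiny parameters, not a construction from the talk.

```python
# One round of a Fiat-Shamir-style interactive identification protocol: the
# prover convinces the verifier that she knows a square root s of v modulo n
# without revealing s. Toy illustration only (n is tiny, one round is played).
import random

n = 3233                     # toy modulus (61 * 53); real schemes use huge n
s = 123                      # prover's secret
v = (s * s) % n              # public value registered with the verifier

# Prover's commitment
r = random.randrange(1, n)
x = (r * r) % n

# Verifier's random challenge
b = random.randint(0, 1)

# Prover's response: r itself if b == 0, r*s mod n if b == 1
y = (r * pow(s, b, n)) % n

# Verifier's check: y^2 == x * v^b (mod n)
assert (y * y) % n == (x * pow(v, b, n)) % n
print("round accepted; a cheating prover passes with probability 1/2 per round")
```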
The new era of biology is computational
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=17
Biology has entered a new era, with bioinformatics producing biological data that are nowadays impossible to obtain with wet experiments. Soon scientists and clinicians will use new DNA technologies to detect mutations driving cancer and other diseases, identify new strains of pathogens, map the physiological effects of the microbial communities residing in our organs, track subtle changes in our immune repertoire, predict drug response, and make innumerable other contributions to our health and to our knowledge of complex biological systems. The scale and complexity of the data will vastly exceed anything the biological and medical community has faced before. Tackling these questions with advanced engineering, new computer algorithms and novel computational approaches is a challenge that will revolutionize biology and medicine through a deeper, ubiquitous use of DNA information. Among other examples, we shall present a computational approach to protein-protein interactions that we developed within a project on neuromuscular diseases. The project demanded high computational power to test billions of interactions; it ran on the machines of the World Community Grid for more than three years and provided a huge amount of information on the interactions of human proteins. High-performance computing helped to obtain an unprecedented amount of information on protein-protein interactions between real partners but also, and most importantly, between non-partners.

Alessandra Carbone is Professor of Computer Science at UPMC, where she has led the Analytical Genomics team since 2003. She is the director of the Laboratory of Computational and Quantitative Biology, created in January 2009 by CNRS and UPMC with the aim of developing an interdisciplinary working environment made of several groups of theoreticians and experimentalists interested in bioinformatics, modeling of complex biological systems, systems biology, and population genetics. Alessandra Carbone received the Prix Joliot-Curie in 2010 from the Ministère de la Recherche et de l'Enseignement Supérieur and from the EADS Foundation, and in 2012 she was awarded the Grammaticakis-Neuman Prize of the Académie des Sciences for "Integrative Biology". Since 2013 she has been a senior member of the Institut Universitaire de France.
Mon, 11 May 2015 10:00:00 +0200 (S_C_colloquium_lip6_2012_17)

Vers des bases de connaissances personnelles
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=16
A Web user today has his or her data and information distributed across a number of services that operate in silos. Computer wizards already know how to control their personal data to some extent. It is now becoming possible for everyone to do the same, and there are many advantages to doing so. Everyone should now be in a position to manage his or her personal information. Furthermore, we will argue that we should move towards personal knowledge bases and discuss the advantages of doing so. We will mention recent work around a datalog dialect, namely Webdamlog.

Serge Abiteboul obtained his PhD from the University of Southern California and a State Doctoral Thesis from the University of Paris-Sud. He has been a researcher at the Institut National de Recherche en Informatique et en Automatique since 1982 and is now Distinguished Affiliated Professor at the École Normale Supérieure de Cachan. He was a Lecturer at the École Polytechnique and Visiting Professor at Stanford and Oxford University. He was Chair Professor at the Collège de France in 2011-12 and Francqui Chair Professor at Namur University in 2012-2013. He co-founded the company Xyleme in 2000. Serge Abiteboul received the ACM SIGMOD Innovation Award in 1998, the EADS Award from the French Academy of Sciences in 2007, the Milner Award from the Royal Society in 2013, and a European Research Council Fellowship (2008-2013). He became a member of the French Academy of Sciences in 2008 and a member of the Academy of Europe in 2011. He is a member of the Conseil national du numérique and Chairman of the Scientific Board of the Société d'Informatique de France. His research work focuses mainly on data, information and knowledge management, particularly on the Web. He founded and is an editor of the blog.
Tue, 14 Apr 2015 10:00:00 +0200 (S_C_colloquium_lip6_2012_16)
Fluidization of discrete event models, or a marriage between the discrete and the continuous
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=15
Formal discrete event, hybrid and continuous representations of systems provide "views" of their structure and behaviour in potentially many fields of application. In information technology (and also in epidemiology, biochemistry, etc.), discrete event models are frequently "adequate" from a descriptive point of view (the idea of faithfulness or accuracy), but the so-called "state explosion problem" may eventually make them impractical to work with.

Technically speaking, the fluidization of a discrete event model can be viewed as a (partial) relaxation of its state "from the integers into the reals". From a more conceptual perspective, it can be viewed as a change of viewpoint "from individuals to populations" (i.e., single agents in a given internal state are no longer distinguishable). The goal of such a relaxation, or change of viewpoint, may be to reduce the computational complexity of the algorithms for certain analysis and synthesis problems, possibly making some properties decidable. Of course, a price must be paid: certain properties cannot be studied on the derived hybrid or continuous model, while for other properties only "approximate" answers can be obtained. As an important advantage, the more populated the system is, the more accurate the answers and the larger the computational savings usually are.

Taking Petri nets as a well-known and expressive family of formalisms for discrete event "views", the presentation will introduce fluidization and focus on the new analysis and synthesis possibilities it opens up, as well as on the price paid for them.
Thu, 03 Jul 2014 17:13:09 +0200 (S_C_colloquium_lip6_2012_15)
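A minimal sketch, under assumed rates and an assumed two-place net, of what such a relaxation looks like in practice: the integer marking of a small Petri net is treated as real-valued and the token flow is integrated as an ordinary differential equation, with firing rates proportional to the marking of the input places.

```python
# Fluidization sketch (illustrative assumptions, not the talk's examples):
# a machine cycling between "idle" and "busy" is modelled as a two-place,
# two-transition Petri net whose marking is relaxed from integers to reals.
# Each transition fires continuously at rate lambda * (marking of its input).

lam = {"start": 2.0, "finish": 1.0}       # assumed firing rates
m = {"idle": 10.0, "busy": 0.0}           # initial marking, now continuous

dt = 0.001
for _ in range(int(5.0 / dt)):            # integrate 5 time units with Euler steps
    flow_start = lam["start"] * m["idle"]     # transition start: idle -> busy
    flow_finish = lam["finish"] * m["busy"]   # transition finish: busy -> idle
    m["idle"] += dt * (flow_finish - flow_start)
    m["busy"] += dt * (flow_start - flow_finish)

# At the fluid steady state the flows balance: 2 * idle = 1 * busy,
# so with 10 tokens in total, idle ~ 3.33 and busy ~ 6.67.
print(m)
```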
Computer Science: All Questions Answered
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=14
Would computer science be recognized as a mature discipline when it had 1,000 deep algorithms? And what is the current state? Why would you think the design of efficient algorithms to be the core of computer science? What is the connection between mathematics and computer programming viewed as an art? Will we have intelligent machines sometime in the future? Should we have them? Any question on any subject, such as the questions above, or software patents, or a distraction about fonts on restaurant menus, will get an answer from Donald Knuth, who "has made as great a contribution to the teaching of mathematics for the present generation of students as any book in mathematics proper in recent decades" (statement from the AMS Steele Prize for Exposition in 1986).
Thu, 03 Jul 2014 17:13:09 +0200 (S_C_colloquium_lip6_2012_14)

Toward a Theory of Trust in Networks of Humans and Computers
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=13
We argue that a general theory of trust in networks of humans and computers must be built not only on a theory of computational trust but also on a theory of behavioral trust. Behavioral trust captures participant preferences (i.e., risk and betrayal aversion) and beliefs in the trustworthiness of other protocol participants. This more comprehensive view of trust would allow us to establish new trust relations where none were possible before. It would help create new economic opportunities by increasing the pool of usable services and removing cooperation barriers among users. It would also help focus security research on areas that promote trust-enhancement infrastructures in human and computer networks.

This work is joint with Virgil Gligor.
Thu, 26 Jun 2014 17:02:03 +0200 (S_C_colloquium_lip6_2012_13)

Myths about MOOCs and Software Engineering Education
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=12
The first part of the talk is about teaching software engineering. Traditional approaches to software development are often neither supported by tools that students can afford to use, nor appropriate for projects whose scope matches a college course. Hence, instructors lecture about software engineering topics, while students continue to build software more or less the way they always have; thus, a software engineering course in practice is often no more than a project course. This sad but stable state of affairs is frustrating to instructors, boring to students, and disappointing to industry.

Cloud computing and the shift in the software industry towards software as a service have led to highly productive tools and techniques that are a much better match for the classroom than earlier software development methods. That is, not only has the future of software been revolutionized, it has changed in a way that makes it easier to teach.

UC Berkeley's revised software engineering course leverages this productivity to let students both enhance a legacy application and develop a new app that matches the requirements of non-technical customers. By experiencing the whole software life cycle repeatedly within a single college course, students actually use the skills that industry has long encouraged and learn to appreciate them. The course is now heartening to faculty, popular with students, and praised by industry.

The second part of the talk is about our experience using Massive Open Online Courses (MOOCs) to teach this material. While the media's attention to MOOCs continues unabated, a recent opinion piece expresses grave concerns about their role ("Will MOOCs Destroy Academia?", Moshe Vardi, CACM 55(11), Nov. 2012). I will try to bust a few MOOC myths by presenting provocative, if anecdotal, evidence that appropriate use of MOOC technology can improve on-campus pedagogy, increase student throughput while actually increasing course quality, and help instructors reinvigorate their teaching. I will also explain the role of MOOCs in enabling a half-dozen universities to replicate and build upon our work via Small Private Online Courses (SPOCs) from EdX and our electronic textbook. My conclusion is that the 21st-century textbook may be a hybrid of SPOCs and Ebooks.
Wed, 21 May 2014 13:08:50 +0200 (S_C_colloquium_lip6_2012_12)
L'information mentale
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=11
Our reasoning relies on all sorts of pieces of knowledge laboriously stored and preserved in our cortical network for many years. Despite the incessant physico-chemical assaults our neurons have endured since they were put in place at the beginning of our lives, these pieces of knowledge are not degraded: the multiplication tables, the first names in our family, Le Corbeau et le Renard, all of it stays firmly in place. This robustness and permanence of mental information can be explained by the properties of redundant distributed coding.
Tue, 22 Apr 2014 10:12:38 +0200 (S_C_colloquium_lip6_2012_11)

On the Preservation of Digital Information
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=10
Saving bits is not enough. The meaning of the bits must also be preserved, along with any metadata needed for their interpretation. The software that created the bits, the software that interprets and renders them, perhaps the operating system needed to run that application, and a description of the computer needed to run the software may all need to be preserved. There are also intellectual property issues to be dealt with, as companies come and go, software ownership changes, and so on. This is a complex problem, especially when the target is preservation for millennia.
Tue, 11 Mar 2014 14:32:29 +0100 (S_C_colloquium_lip6_2012_10)

Tony Hoare: Laws of concurrent system design
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=9
The algebraic laws that govern the behaviour of concurrent systems, with both sequential and concurrent composition, are as simple as the familiar laws of arithmetic learnt at school. They are strong enough to derive the structural rules of Hoare logic, which were designed as a proof system for the verification of programs. They also derive the rules of O'Hearn's separation logic, as well as the rules of a structural operational semantics, such as those used by Milner to define the validity of an implementation of CCS. The laws are simpler than each of these calculi separately, and stronger than both of them combined.

The laws are satisfied by a simple graph model of the behaviour of a concurrent system, in which basic actions are nodes, connected by arrows that represent dependency between actions. Such a graph might be produced by a testing tool to help reveal the causes of an error and decide what to do about it. The model is highly generic, and can be used for systems with different basic actions, expressed in different languages, and at different levels of granularity and abstraction.

I speculate that one day algebraic laws such as these will be accepted as a scientific and semantic basis for a Design Automation toolkit for systems engineering. Its tools will include system verification, program analysis, program generation, compilation and optimisation, test case generation, and error analysis.
Fri, 13 Dec 2013 17:15:28 +0100 (S_C_colloquium_lip6_2012_9)
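As a flavour of the kind of laws meant here, the following sample is drawn from the published Concurrent Kleene Algebra line of work (sequential composition ;, concurrent composition, and a refinement ordering); it is an illustrative selection, not the exact axiom set presented in the talk.

```latex
% A sample of algebraic laws for programs p, q, r, s, with sequential
% composition (;), concurrent composition (\parallel) and refinement (\sqsubseteq).
\begin{align*}
  p \mathbin{;} (q \mathbin{;} r) &= (p \mathbin{;} q) \mathbin{;} r
      && \text{sequential composition is associative}\\
  p \parallel q &= q \parallel p
      && \text{concurrent composition is commutative}\\
  p \parallel (q \parallel r) &= (p \parallel q) \parallel r
      && \text{concurrent composition is associative}\\
  (p \parallel q) \mathbin{;} (r \parallel s) &\sqsubseteq (p \mathbin{;} r) \parallel (q \mathbin{;} s)
      && \text{the exchange law, linking the two operators}
\end{align*}
```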
Gilles Dowek: Are formal methods the future of air traffic control?
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=8
Decentralized air traffic control is a concept of operations in which air traffic control is decentralized to the aircraft, and on-board computers are used to assist or replace the pilots in deciding the route of the aircraft. In some experimental concepts, the full operation of the aircraft is delegated to on-board computers. Such concepts can be accepted by the general population only if the computer systems used are extremely safe, and this makes air traffic control a major application domain for formal methods. The diversity of problems in air traffic control leads us to favor no particular type of formal method but to use them all, as different methods address different types of problems.
Fri, 13 Dec 2013 16:59:51 +0100 (S_C_colloquium_lip6_2012_8)

Lauréat prix Gilles Kahn 2012 : Mathieu Feuillet
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=7
As wireless networking continues to develop, efficient use of the frequency spectrum, a rare shared resource, is crucial. We show that CSMA, the random-access dynamic bandwidth allocation protocol currently used by WiFi, is inefficient, and we present a simple, fully distributed algorithm that fixes it.

In this talk, we will give a brief introduction to stochastic network modelling. In particular, we will give an intuition of scaling analysis and stochastic averaging, which are used to analyse the stochastic processes resulting from this kind of modelling. In the second part of the talk, we will illustrate these tools on the example of CSMA. In particular, we will explain how this approach allows us to give a precise definition of "efficient" and to prove whether an algorithm is efficient.

The Gilles Kahn Prize is awarded yearly to an excellent PhD thesis in informatics. It was created by the Société informatique de France (SIF, the French learned society in informatics) and is sponsored by the Académie des Sciences.
Fri, 19 Jul 2013 11:02:01 +0200 (S_C_colloquium_lip6_2012_7)
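The efficiency question can be made tangible with a deliberately simplified random-access model (a slotted Aloha-style toy, not the CSMA dynamics analysed in the thesis): each station transmits in a slot with some probability, and a slot is useful only if exactly one station transmits.

```python
# Toy slotted random-access simulation: with N stations each transmitting
# with probability p, throughput peaks near p = 1/N; both timid and
# aggressive access probabilities waste the shared channel.
import random

def throughput(n_stations, p, n_slots=50_000):
    successes = 0
    for _ in range(n_slots):
        transmitters = sum(random.random() < p for _ in range(n_stations))
        if transmitters == 1:          # neither idle nor a collision
            successes += 1
    return successes / n_slots

for p in (0.02, 0.1, 0.3):
    print(f"p={p:4.2f}  fraction of useful slots ~ {throughput(10, p):.2f}")
```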
Lauréat prix Gilles Kahn 2012 : Mathilde Noual
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=5
From genetic regulation systems to computation models and social networks, many systems encountered daily can be seen as networks of interacting entities in which every event that can take place results from interactions between entities. To better understand these systems, one can consider the characteristics of each specific system and aim at modelling it as faithfully as possible. But one can also consider instead a much cruder modelling of their common mechanisms. Along these lines, I propose to explore some properties that are intrinsic to being an interaction system.

The Gilles Kahn Prize is awarded yearly to an excellent PhD thesis in informatics. It was created by the Société informatique de France (SIF, the French learned society in informatics) and is sponsored by the Académie des Sciences.
Fri, 19 Jul 2013 11:01:57 +0200 (S_C_colloquium_lip6_2012_5)

Lauréat prix Gilles Kahn 2012 : Camille Couprie
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=6
Image processing and analysis techniques are prevalent in various fields, such as medical imaging, materials analysis, and photography. They are based on techniques such as image segmentation, i.e., the extraction of patterns or objects, for the purpose of quantification or visualization. The classical "Watershed" segmentation algorithm "floods" the image to extract contours in the areas where different catchment basins meet. This technique is widely used in image processing, due to its linear complexity and its ability to segment images into an arbitrarily large number of regions.

The main result of my PhD shows how the Watershed can be used to optimize a cost function that is very popular in computer vision and image processing. This work has two main impacts:
1. It provides a unified model for variational segmentation algorithms.
2. It interprets the Watershed algorithm as a general optimization method, allowing it to be used in a variety of computer vision tasks, such as filtering, surface reconstruction, stereo vision, and object class segmentation.

The Gilles Kahn Prize is awarded yearly to an excellent PhD thesis in informatics. It was created by the Société informatique de France (SIF, the French learned society in informatics) and is sponsored by the Académie des Sciences.
Fri, 19 Jul 2013 11:01:59 +0200 (S_C_colloquium_lip6_2012_6)
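For readers unfamiliar with the Watershed, here is a minimal priority-flood sketch in Python on a hand-made 3x5 "image"; it illustrates the classical flooding algorithm mentioned above, not the optimization framework that is the thesis's contribution. The image, seeds and grey levels are made-up values.

```python
# Minimal priority-flood watershed: starting from labeled seed pixels, pixels
# are flooded in order of increasing grey level, and each pixel inherits the
# label of the basin that reaches it first.
import heapq

image = [
    [1, 2, 9, 2, 1],
    [1, 3, 9, 3, 2],
    [2, 2, 8, 2, 1],
]
seeds = {(0, 0): 1, (0, 4): 2}          # two markers, one per catchment basin
rows, cols = len(image), len(image[0])

labels = {pos: lab for pos, lab in seeds.items()}
heap = [(image[r][c], r, c) for (r, c) in seeds]
heapq.heapify(heap)

while heap:
    _, r, c = heapq.heappop(heap)        # lowest unprocessed pixel first
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in labels:
            labels[(nr, nc)] = labels[(r, c)]      # flooded from this basin
            heapq.heappush(heap, (image[nr][nc], nr, nc))

for r in range(rows):
    print([labels[(r, c)] for c in range(cols)])
# Each row prints [1, 1, 1, 2, 2]: the left basin gets label 1, the right
# basin label 2, and the high ridge is absorbed by whichever basin reaches
# it first as the flood rises.
```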
Taking Education Online: A Unique Opportunity for the New Millennium
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=4
In the early years of the third millennium, most professors are still teaching in virtually the same way they were taught and their teachers were taught, stretching back centuries. This situation is likely to change, and soon. Technology is transforming (if not threatening to overwhelm) higher education, as MOOCs and online content become widely available. University students seeking to learn a topic, who now have little if any choice, are about to be presented with a vast array of choices. What student would not want to swap a tired professor writing slowly on a chalkboard for a well-produced series of videos and associated content, given by a world leader in the field? We are on the verge of a transformation on the scale of the one wrought by Gutenberg. This imminent change raises a host of fascinating and far-reaching questions.
Wed, 12 Jun 2013 14:11:22 +0200 (S_C_colloquium_lip6_2012_4)

Unifying logic and probability: A "New Dawn" for Artificial Intelligence?
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=3
Logic and probability are ancient subjects whose unification holds significant potential for the field of artificial intelligence. A recent cover article in New Scientist went so far as to announce an "Intelligence Revolution" and a "New Dawn for AI". This talk will explain some of the underlying technical ideas and their applications.

Stuart Russell is a Professor of Computer Science at UC Berkeley and Adjunct Professor of Neurosurgery at UC San Francisco. He is currently visiting UPMC and holds the Chaire Blaise Pascal (*). His research covers many aspects of artificial intelligence and machine learning. He is a fellow of AAAI, ACM, and AAAS and winner of the IJCAI Computers and Thought Award. His book "Artificial Intelligence: A Modern Approach" (with Peter Norvig) is the standard text in the field.

(*) The speaker is supported by the Chaire Internationale de Recherche Blaise Pascal, funded by the French State and the Région Île-de-France and administered by the Fondation de l'École Normale Supérieure.
Tue, 12 Feb 2013 10:32:12 +0100 (S_C_colloquium_lip6_2012_3)

Le génie mathématique, du théorème des quatre couleurs à la classification des groupes
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=2
Thirty years ago, computers burst into mathematics with Appel and Haken's famous proof of the four colour theorem. Initially limited to plain computation, their role is now expanding to reasoning whose complexity exceeds the capabilities of most humans, such as the proof of the classification of finite simple groups. We have just formalized its first major step, the Feit-Thompson theorem, using a range of methods and techniques that go from formal logic to software engineering.

Georges Gonthier is a researcher at the Cambridge laboratory of Microsoft Research, after earlier positions at Inria and Bell Labs. His work ranges from embedded systems (the Esterel language, the Ariane rocket) to models of concurrency and security (the join calculus). After formalizing the proof of the four colour theorem in 2005, he created the team of the Microsoft Research-Inria joint laboratory that has just completed the formalization of the Feit-Thompson theorem. In 2011 he received the Grand Prix d'informatique of the EADS Foundation.
Fri, 07 Dec 2012 14:08:27 +0100 (S_C_colloquium_lip6_2012_2)

Le temps et les événements en informatique
http://video.upmc.fr//differe.php?collec=S_C_colloquium_lip6_2012&video=1
The precise handling of time and of regular or sporadic events is essential in embedded computing, in simulation, and in many other fields, yet it is very poorly supported in conventional programming. We study the various specification and programming formalisms specifically developed for this purpose, how to implement them correctly, and the successes and open problems of the field.

Since 1973, Gérard Berry has worked on the design of programming languages and on program verification. He has been particularly interested in the programming of reactive and real-time systems, defining and implementing the Esterel language. In 2001 he became Scientific Director of the company Esterel Technologies, and he then joined Inria in 2009. Since September 2009 he has held the chair "Algorithmes, machines et langages" at the Collège de France, where he is currently giving the course "Le temps et les événements en informatique".
Tue, 13 Nov 2012 12:05:24 +0100 (S_C_colloquium_lip6_2012_1)