Bo Zhang. Professor Bo Zhang is a professor in the Department of Computer Science and Technology at Tsinghua University and a fellow of the Chinese Academy of Sciences. He graduated from the Automatic Control Department of Tsinghua University in 1958 and has been a faculty member there since then. From February 1980 to February 1982 he was a visiting scholar at the University of Illinois at Urbana-Champaign, USA. He is currently the chairman of the steering committee of the Research Institute of Information Technology, Tsinghua University, a technical advisor to the Fujian government, and a member of the Technical Advisory Board of Microsoft Research Asia.
He is engaged in research on artificial intelligence, artificial neural networks, genetic algorithms, intelligent robotics, pattern recognition and intelligent control. In these fields, he has published over 150 papers and 4 monographs, two of which are in English.
Speech Title: Granular Computing and Computational Complexity
ABSTRACT: Granular computing imitates the human multi-granular strategy for problem solving in order to endow computers with the same capability. Its ultimate goal is to reduce computational complexity. To that end, following the simplicity principle, the problem at hand should be represented as simply as possible. From structural information theory, it is known that if a problem is represented at different granularities, its hierarchical description will be a simpler one. The simpler the representation, the lower the computational complexity of problem solving should be.
We have proposed a quotient space theory for multi-granular computing. Based on this theory, a problem represented by quotient spaces has a hierarchical structure. Therefore, quotient space based multi-granular computing can reduce the computational complexity of problem solving. In the talk, we will use examples to discuss how hierarchical representation reduces the computational complexity of problem solving.
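As a toy illustration of how solving in a coarse quotient space first can reduce search cost, the sketch below compares a flat linear scan with a two-level search that tests block summaries (the coarse granules) before descending into a single block. The data, block summaries, and cost counting are all invented for illustration, not taken from the quotient space theory itself.

```python
import math

def flat_search(items, target):
    """Linear scan over a sorted list: O(n) comparisons."""
    count = 0
    for i, x in enumerate(items):
        count += 1
        if x == target:
            return i, count
    return -1, count

def hierarchical_search(items, target, block_size):
    """Two-level search: scan coarse block summaries (a quotient space)
    first, then scan only inside the matching block: O(n/b + b) comparisons.
    The min/max summary test assumes the list is sorted."""
    count = 0
    blocks = [items[i:i + block_size] for i in range(0, len(items), block_size)]
    for bi, block in enumerate(blocks):
        count += 1                                # one coarse test per block
        if block[0] <= target <= block[-1]:       # block summary: min/max
            for j, x in enumerate(block):
                count += 1
                if x == target:
                    return bi * block_size + j, count
    return -1, count

items = list(range(10000))
target = 9876
b = int(math.sqrt(len(items)))                    # block size ~ sqrt(n)
_, flat_cost = flat_search(items, target)
_, hier_cost = hierarchical_search(items, target, b)
```

With two granularity levels the cost drops from order n to order sqrt(n); adding more levels of the hierarchy compounds the saving, which is the intuition behind the complexity reduction discussed in the talk.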
Ian H. Witten. Ian H. Witten is Professor of Computer Science at the University of Waikato in New Zealand, where he directs the New Zealand Digital Library research project. His research interests include language learning, information retrieval, and machine learning. He has published widely, including several books, such as Managing Gigabytes (1999), How to Build a Digital Library (2003), Data Mining (2005) and Web Dragons (2007). He is a Fellow of the ACM and of the Royal Society of New Zealand. He received the 2004 IFIP Namur Award, a biennial honour accorded for "outstanding contribution with international impact to the awareness of social implications of information and communication technology"; (with the rest of the Weka team) the 2005 SIGKDD Service Award for "an outstanding contribution to the data mining field"; and, in 2006, the Royal Society of New Zealand Hector Medal for "an outstanding contribution to the advancement of the mathematical and information sciences."
Speech Title: Wikipedia and how to use it for semantic document representation
ABSTRACT: Wikipedia is a goldmine of information; not just for its many readers, but also for the growing community of researchers who recognize it as a resource of exceptional scale and utility. It represents a vast investment of manual effort and judgment: a huge, constantly evolving tapestry of concepts and relations that is being applied to a host of tasks.
This talk focuses on the process of "wikification"; that is, automatically and judiciously augmenting a plain-text document with pertinent hyperlinks to Wikipedia articles, as though the document were itself a Wikipedia article. I first describe how Wikipedia can be used to determine semantic relatedness between concepts. Then I explain how to wikify documents by exploiting Wikipedia's internal hyperlinks for relational information and its anchor texts for lexical information. Data mining techniques are used throughout to optimize the models involved.
I will discuss applications to knowledge-based information retrieval, topic indexing, document tagging, and document clustering. Some of these perform at human levels. For example, on CiteULike data, automatically extracted tags are competitive with tag sets assigned by the best human taggers, according to a measure of consistency with other human taggers. All this work uses English, but involves no syntactic parsing, so the techniques are language independent.
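The link-based relatedness idea mentioned above can be sketched as follows: two articles are considered semantically close when the sets of articles linking to them overlap heavily. This is a simplified sketch in the spirit of such normalized link-distance measures; the in-link sets and article count are toy values, not real Wikipedia data.

```python
import math

def relatedness(inlinks_a, inlinks_b, total_articles):
    """Link-based semantic relatedness: compare the in-link sets of two
    articles, normalized by the total collection size. Returns a value
    in [0, 1] (1 = identical in-link sets, 0 = no overlap)."""
    a, b = set(inlinks_a), set(inlinks_b)
    common = a & b
    if not common:
        return 0.0
    dist = (math.log(max(len(a), len(b))) - math.log(len(common))) / \
           (math.log(total_articles) - math.log(min(len(a), len(b))))
    return max(0.0, 1.0 - dist)

# Hypothetical article IDs standing in for pages that link to each concept.
cat = {1, 2, 3, 4, 5, 6}
dog = {2, 3, 4, 5, 7, 8}
car = {9, 10, 11}
W = 1000  # pretend total number of articles in the collection
```

Here relatedness(cat, dog, W) is high because the two concepts share most of their incoming links, while relatedness(cat, car, W) is zero; a wikifier can use such scores to disambiguate which article an anchor text should link to.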
Roman Slowinski. Roman Slowinski was born in 1952 in Poznan, Poland. He earned his PhD in Computer Science in 1977 and his Dr. Habil. in Decision Sciences in 1981, both from the Poznan University of Technology. He has been a Professor since 1989 and is Founding Head of the Laboratory of Intelligent Decision Support Systems within the Institute of Computing Science, Poznan University of Technology, Poland. Since 2002 he has also held a Professor's position at the Systems Research Institute of the Polish Academy of Sciences in Warsaw. He has held a European Chair professorship at the University of Paris Dauphine, and has been an invited professor at the Swiss Federal Institute of Technology in Lausanne, the University of Catania, and Polytech'Tours. Roman Slowinski has conducted extensive research on the methodology and techniques of decision aiding, including multiple criteria decision making, preference modeling, modeling of uncertainty in decision problems, and knowledge-based decision support. This methodology cleverly combines Operations Research and Computational Intelligence. Today Roman Slowinski is perhaps best known for his seminal work on using rough sets in decision analysis. He started this work in 1983 with the founder of the rough set concept, the late Zdzislaw Pawlak, and has continued it with Salvatore Greco and Benedetto Matarazzo since the early 1990s. He organized the First International Workshop on Rough Set Theory and Applications in Poznan in 1992. In 2010 he was elected President of the International Rough Set Society (IRSS). His record of publications includes 14 monographs and over 380 scientific articles in international journals and edited volumes. He has supervised 24 Ph.D. theses in Operations Research and Computer Science. Roman Slowinski has been Editor-in-Chief of the European Journal of Operational Research (EJOR) since 1999.
He is a recipient of the EURO Gold Medal (1991) and the MCDM Society's Edgeworth-Pareto Award (1997). In 2004, he was elected a member of the Polish Academy of Sciences, a corporation of 350 outstanding Polish scholars. In 2005, he received the Annual Prize of the Foundation for Polish Science, regarded as the most prestigious scientific award in Poland. Additional recognitions include Doctor Honoris Causa degrees from the Polytechnic Faculty of Mons (2000), the University of Paris Dauphine (2001), and the Technical University of Crete (2008).
Speech Title: Knowledge Discovery about Preferences using the Dominance-based Rough Set Approach
ABSTRACT: The aim of scientific decision aiding is to give the decision maker a recommendation concerning a set of objects (also called alternatives, solutions, acts, actions, ...) evaluated from multiple points of view considered relevant for the problem at hand and called attributes (also called features, variables, criteria, ...). On the other hand, a rational decision maker acts with respect to his/her value system so as to make the best decision. Confronting the decision maker's value system with the characteristics of the objects results in an expression of the decision maker's preferences over the set of objects. In order to recommend the most-preferred decisions with respect to classification, choice or ranking, one must identify the decision maker's preferences. In this presentation, we review multi-attribute preference models and focus on preference discovery from data describing some past decisions of the decision maker. The considered preference model has the form of a set of "if ..., then ..." decision rules induced from the data. In the case of multi-attribute classification the syntax of the rules is: "if the performance of action a is better (or worse) than given values of some attributes, then a belongs to at least (at most) a given class", and in the case of multi-attribute choice or ranking: "if action a is preferred to action b in at least (at most) given degrees with respect to some attributes, then a is preferred to b in at least (at most) a given degree". To structure the data prior to the induction of such rules, we use the Dominance-based Rough Set Approach (DRSA). DRSA is a methodology for reasoning about ordinal data, which extends the classical rough set approach by handling background knowledge about ordinal evaluations of objects and about monotonic relationships between these evaluations.
We present DRSA for preference discovery in the cases of multi-attribute classification, choice and ranking; of single and multiple decision makers; and of decision under uncertainty and time preference. The presentation is mainly based on publications [1,2,3].
[1] S. Greco, B. Matarazzo, R. Slowinski: Dominance-based rough set approach to decision involving multiple decision makers. In: Rough Sets and Current Trends in Computing (RSCTC 2006), LNAI 4259, Springer, Berlin, 2006, pp. 306-317.
[2] S. Greco, B. Matarazzo, R. Slowinski: Dominance-based rough set approach to decision under uncertainty and time preference. Annals of Operations Research, 176 (2010), 41-75.
[3] R. Slowinski, S. Greco, B. Matarazzo: Rough Sets in Decision Making. In: R.A. Meyers (ed.): Encyclopedia of Complexity and Systems Science, Springer, New York, 2009, pp. 7753-7786.
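The "at least" classification rules quoted in the abstract can be sketched in a few lines: a rule fires when an action's performance meets or exceeds the rule's thresholds on every condition attribute, and the action is assigned the highest class any firing rule supports. The attributes, thresholds, and class labels below are hypothetical, not taken from the cited publications.

```python
def matches_at_least(action, conditions):
    """An 'at least' rule fires when the action's performance on each
    condition attribute meets or exceeds the stated threshold (all
    attributes are assumed gain-type: larger is better)."""
    return all(action[attr] >= th for attr, th in conditions.items())

def classify(action, rules, default_class=1):
    """Assign the highest class supported by any matching rule:
    'if a is at least this good on these attributes, then a belongs
    to at least the given class'."""
    best = default_class
    for conditions, at_least_class in rules:
        if matches_at_least(action, conditions):
            best = max(best, at_least_class)
    return best

# Hypothetical rules induced from a decision maker's past decisions.
rules = [
    ({"quality": 3},             2),  # quality >= 3 -> class at least 2
    ({"quality": 4, "price": 3}, 3),  # quality >= 4 and price >= 3 -> class at least 3
]

good = {"quality": 5, "price": 4}
poor = {"quality": 2, "price": 5}
```

In DRSA proper, such rules are induced from the dominance-based lower and upper approximations of the ordered classes rather than written by hand; the sketch only shows how the induced rules are applied.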
Deyi Li. Deyi Li graduated from the Electronic Engineering Department of Southeast University in 1967 and received his PhD in Computer Science from Heriot-Watt University, Edinburgh, UK, in 1983. He was elected a member of the Chinese Academy of Engineering in 1999 and a member of the Eurasian Academy of Sciences in 2004. At present, he is a professor at Tsinghua University, director of the Department of Information Science at the National Natural Science Foundation of China, and vice president of both the Chinese Institute of Electronics and the Chinese Association of Artificial Intelligence. He has published over 120 papers on a wide range of topics in artificial intelligence and 4 monographs, and received the Premium Award of the IEE Headquarters (1984/85) and an IFAC World Congress outstanding paper award (1999). His current interests include networked data mining, artificial intelligence with uncertainty, cloud computing, and cognitive physics.
Speech Title: Comparative Study on Mathematical Foundations of Type-2 Fuzzy Set, Rough Set and Cloud Model
ABSTRACT: Mathematical representation of a concept with uncertainty is one of the foundations of Artificial Intelligence. The type-2 fuzzy set introduced by Mendel studies the fuzziness of the membership grade of a concept. The rough set proposed by Pawlak defines an uncertain concept through two crisp sets. The cloud model, based on a probability measure space, automatically produces random membership grades of a concept through a cloud generator. All three methods concentrate on the essentials of uncertainty and have been applied in many fields for more than ten years. However, their mathematical foundations are quite different. A detailed comparative study of the three methods will reveal the relationships among them and provide a fundamental contribution to Artificial Intelligence with uncertainty.
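A minimal sketch of a forward normal cloud generator of the kind described above: from an expectation Ex, entropy En, and hyper-entropy He it produces "cloud drops", i.e. values paired with random membership grades. The concept and parameter values are illustrative assumptions, not from the talk.

```python
import math
import random

def normal_cloud(Ex, En, He, n):
    """Forward normal cloud generator: for each drop, draw a perturbed
    entropy En' ~ N(En, He^2), then a value x ~ N(Ex, En'^2), and give it
    the random membership grade mu = exp(-(x - Ex)^2 / (2 * En'^2))."""
    drops = []
    for _ in range(n):
        En_prime = random.gauss(En, He)
        if En_prime == 0:                    # guard against division by zero
            En_prime = 1e-12
        x = random.gauss(Ex, abs(En_prime))
        mu = math.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2))
        drops.append((x, mu))
    return drops

# Concept "about 25 (years old)" with illustrative parameters.
drops = normal_cloud(Ex=25.0, En=3.0, He=0.3, n=1000)
```

Unlike a type-2 fuzzy set, which models the membership grade as itself fuzzy, the cloud model makes the grade a random variable: two drops at the same x generally receive different membership values, which is the randomness-plus-fuzziness combination the comparison in the talk turns on.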
Jianchang (JC) Mao. Dr. Jianchang (JC) Mao is a Vice President and the head of Advertising Sciences in Y! Labs, overseeing the R&D of advertising technologies and products, including Search Advertising, Contextual Advertising, Display Advertising, Targeting, and Categorization. He was also a Science/Engineering Director responsible for the development of backend technologies for several Yahoo! Social Search products, including Y! Answers and Y! MyWeb (Social Bookmarks). Prior to joining Yahoo!, Dr. Mao was Director of Emerging Technologies & Principal Architect at Verity Inc., a leader in Enterprise Search (acquired by Autonomy), from 2000 to 2004. Before that, Dr. Mao was a research staff member at the IBM Almaden Research Center from 1994 to 2000. Dr. Mao's research interests include Machine Learning, Data Mining, Information Retrieval, Computational Advertising, Social Networks, Pattern Recognition and Image Processing. He received an Honorable Mention Award in the ACM KDD Cup 2002, the IEEE Transactions on Neural Networks Outstanding Paper Award in 1996, and an Honorable Mention Award from the International Pattern Recognition Society in 1993. Dr. Mao served as an associate editor of the IEEE Transactions on Neural Networks from 1999 to 2000. He received his Ph.D. degree in Computer Science from Michigan State University in 1994.
Speech Title: Scientific Challenges in Contextual Advertising
ABSTRACT: Online advertising has been fueling the rapid growth of the Web, which offers a plethora of free web services, ranging from search, email, news, sports, finance, and video to various social network services. Such free services have accelerated the shift in people's media time from offline to online. As a result, advertisers are spending more and more of their advertising budget online. This phenomenon is a powerful ecosystem play of users, publishers, advertisers, and ad networks. The rapid growth of online advertising has created enormous opportunities as well as technical challenges that demand computational intelligence. Computational Advertising has emerged as a new interdisciplinary field that studies the dynamics of the advertising ecosystem in order to solve the challenging problems that arise in online advertising.
In this talk, I will provide a brief introduction to various forms of online advertising, including search advertising, contextual advertising, and guaranteed and non-guaranteed display advertising. Then I will focus on the problem of contextual advertising, which is to find the best matching ads from a large ad inventory to a user in a given context (e.g., page view) to optimize the utilities of the participants in the ecosystem under certain business constraints (blocking, targeting, etc.). I will present a problem formulation and describe scientific challenges in several key technical areas involved in solving this problem, including user understanding, semantic analysis of page content, user response prediction, online learning, ranking, and yield optimization.
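The core matching step can be sketched as ranking an ad inventory against a page context by similarity of term vectors, subject to a simple blocking constraint. The term weights, ad IDs, and blocking list below are invented for illustration; production systems use far richer semantic features and response models, as the talk discusses.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    common = set(u) & set(v)
    dot = sum(u[t] * v[t] for t in common)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def match_ads(page_vec, ads, blocked, k=2):
    """Rank the ad inventory against the page context, honoring a
    blocking constraint, and return the top-k ad IDs."""
    scored = [(cosine(page_vec, vec), ad_id)
              for ad_id, vec in ads.items() if ad_id not in blocked]
    scored.sort(reverse=True)
    return [ad_id for _, ad_id in scored[:k]]

# Hypothetical page context and ad inventory.
page = {"finance": 0.8, "loan": 0.5, "rates": 0.3}
ads = {
    "ad_mortgage": {"loan": 0.9, "rates": 0.6},
    "ad_sports":   {"football": 1.0},
    "ad_bank":     {"finance": 0.7, "loan": 0.2},
    "ad_casino":   {"finance": 0.4, "loan": 0.4},
}
top = match_ads(page, ads, blocked={"ad_casino"})
```

Lexical cosine similarity alone is a weak proxy for relevance; the talk's themes of semantic page analysis and response prediction address exactly where such a bare-bones matcher falls short.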
Sankar K. Pal (www.isical.ac.in/~sankar) is the Director and a Distinguished Scientist of the Indian Statistical Institute. Currently, he is also a J.C. Bose Fellow of the Govt. of India. He founded the Machine Intelligence Unit and the Center for Soft Computing Research: A National Facility in the Institute in Calcutta. He received a Ph.D. in Radio Physics and Electronics from the University of Calcutta in 1979, and another Ph.D. in Electrical Engineering along with a DIC from Imperial College, University of London in 1982.
Prof. Pal is a Fellow of the IEEE, USA; the Academy of Sciences for the Developing World (TWAS), Italy; the International Association for Pattern Recognition, USA; the International Association of Fuzzy Systems, USA; and all four National Academies for Science/Engineering in India. He is a co-author of fifteen books and more than three hundred research publications in the areas of Pattern Recognition and Machine Learning, Image Processing, Data Mining and Web Intelligence, Soft Computing, Neural Nets, Genetic Algorithms, Fuzzy Sets, Rough Sets and Bioinformatics.
Prof. Pal is or was an Associate Editor of IEEE Trans. Pattern Analysis and Machine Intelligence (2002-06), IEEE Trans. Neural Networks (1994-98 & 2003-06), Neurocomputing (1995-2005), Pattern Recognition Letters, Int. J. Pattern Recognition & Artificial Intelligence, Applied Intelligence, Information Sciences, Fuzzy Sets and Systems, Fundamenta Informaticae, LNCS Trans. on Rough Sets, Int. J. Computational Intelligence and Applications, IET Image Processing, J. Intelligent Information Systems, and Proc. INSA-A; Editor-in-Chief of Int. J. Signal Processing, Image Processing and Pattern Recognition; a Book Series Editor of Frontiers in Artificial Intelligence and Applications (IOS Press) and Statistical Science and Interdisciplinary Research (World Scientific); a Member of the Executive Advisory Editorial Board of IEEE Trans. Fuzzy Systems, Int. Journal on Image and Graphics, and Int. Journal of Approximate Reasoning; and a Guest Editor of IEEE Computer.
Speech Title: F-granulation, Generalized Rough Entropy and Pattern Recognition
ABSTRACT: The role of rough sets in uncertainty handling and granular computing is described. The significance of their integration with other soft computing tools and the relevance of rough-fuzzy computing as a stronger paradigm for uncertainty handling are explained. Different applications of rough granules and certain important issues in their implementation are stated. Three tasks, namely class-dependent rough-fuzzy granulation for classification, rough-fuzzy clustering, and the definition of generalized rough sets for image ambiguity measures and analysis, are then addressed in this regard, explaining the nature and characteristics of the granules used therein.
The merits of class-dependent granulation together with neighborhood rough sets for feature selection are demonstrated in terms of different classification indices. The significance of a new measure, called the "dispersion" of classification performance, which focuses on confused classes for higher-level analysis, is explained in this regard. The superiority of rough-fuzzy clustering is illustrated for determining bio-bases (c-medoids) in encoding protein sequences for analysis. Generalized rough sets using the concept of fuzziness in granules and sets are defined for both equivalence and tolerance relations. These are followed by definitions of entropy and different image ambiguities. Image ambiguity measures, which take into account both the fuzziness in boundary regions and the rough resemblance among nearby gray levels and nearby pixels, have been found to be useful for various image analysis operations.
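The rough-set ingredient of such image ambiguity measures can be sketched as follows: approximate a foreground set (pixels above a threshold) by granules (small windows), and let the ambiguity grow with the gap between the lower and upper approximations. This is only an illustration of the generic roughness measure 1 - |lower|/|upper| over a toy granulation, not Prof. Pal's exact formulation, which also incorporates fuzziness.

```python
def roughness(image, threshold, granule=2):
    """Roughness of the foreground set {pixels >= threshold} with respect
    to a granulation into granule x granule windows:
    roughness = 1 - |lower approximation| / |upper approximation|,
    where a window joins the lower approximation if all its pixels are
    foreground, and the upper approximation if any pixel is."""
    h, w = len(image), len(image[0])
    lower = upper = 0
    for i in range(0, h, granule):
        for j in range(0, w, granule):
            window = [image[r][c]
                      for r in range(i, min(i + granule, h))
                      for c in range(j, min(j + granule, w))]
            if all(p >= threshold for p in window):
                lower += len(window)
            if any(p >= threshold for p in window):
                upper += len(window)
    return 1.0 - lower / upper if upper else 0.0

# Toy 4x4 gray-level images: a crisp bright block yields zero roughness,
# while a blurred object boundary makes every foreground granule ambiguous.
crisp = [[9, 9, 0, 0],
         [9, 9, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
fuzzy = [[9, 5, 0, 0],
         [5, 9, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
```

Minimizing such a roughness (or the corresponding rough entropy) over candidate thresholds is one way these measures are put to work in image segmentation.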
The talk concludes by stating future directions of research and the remaining challenges.