Tutorials
Three tutorials will be held on September 3. All of these are free for registered participants.
"Data Mining and Knowledge Discovery"
by Professor Ho To Bao, JAIST
Knowledge discovery and data mining (KDD) has become an active and growing interdisciplinary area of information technology. It is not only of academic interest but also of great practical significance. In the past few years, KDD has attracted a large number of researchers and practitioners from many disciplines, including machine learning, databases, AI, statistics, data visualization, and high performance computing. KDD is important simply because anyone who has a database can use this technology to uncover useful information hidden in it.
This tutorial provides an intuitive introduction to basic concepts and techniques of KDD. It will cover the following main topics:
"Grid Computing & OGSA"
by Mr. K. Sugimoto and Ms. Y. Sawatani, IBM
Grid computing, that is, distributed computing over the Internet using open standards, has recently attracted considerable attention. It makes it possible to form virtual collaborative organizations that share applications and data in an open, heterogeneous computing environment, and to aggregate large amounts of geographically dispersed computing resources to tackle large problems and workloads as if all the resources were located at a single site. Grid computing is already emerging in a number of key applications. In particular, the scientific and technical communities are using it to collaborate across institutions around the world in such application areas as high-energy physics, life sciences, and engineering design. In this tutorial, we first present an overview of grid computing that covers:
Then we focus on the Open Grid Services Architecture (OGSA), an evolving Globus architecture that integrates Grid computing and Web services concepts and technologies. We cover:
"Actuality and Trend of High Performance Computing and Parallelizing Compilations"
by Professor Minyi Guo, Aizu University
In recent years, high performance computing has been used more and more widely in various aspects of real life, and there have been major efforts to develop approaches to the parallelization of scientific applications. Parallelizing compilers play an important role by automatically customizing programs for complex processor architectures, improving portability, and providing high performance to non-expert programmers.
In this tutorial, we first review the techniques used in current parallelizing compilation. Parallelizing compilation can be classified into automatic parallelization and parallelizing compilers for parallel programming languages. For automatic parallelization, we summarize various parallelism detection techniques such as data dependence analysis, loop restructuring, data distribution, symbolic analysis, and inter-procedural analysis. For parallelizing compilers for parallel programming languages, we introduce the implementation techniques of HPF and OpenMP, which are the most typical data-parallel language for distributed-memory multicomputers and task-parallel language for shared-memory multiprocessors, respectively.
Additionally, we share our experience with high performance programming using MPI, HPF, and OpenMP in scientific computations. Based on this experience, we suggest better parallel solutions for different kinds of computational problems.
We also survey trends in high performance computing and parallelizing compilation techniques. In particular, we introduce the development of multigrain parallelism techniques, commutativity analysis in parallelizing compilers, and the development of combined HPF-OpenMP compilers.