15 – 17 March 2018, Hanoi, Vietnam
In observation-based science, the objective of an analysis is often to associate measurables with an unknown state (of a satellite) or to infer latent variables from measurements. Because ground-truth labels are often unavailable in satellite analyses, data-driven techniques for calibrating confidence levels, e.g. cross-validation, are not suitable. On the other hand, analysis results are required to have calibrated and traceable confidence levels. We have found some success in using (Bayesian) statistical machine learning techniques such as Gaussian Processes, Hidden Markov Models and other Bayesian methods for object classification, signature prediction and change-point detection. These techniques generally produce calibrated probabilities. While the signature of a remote GEO object is inherently a time series, spectral and concurrent measurements present both the opportunity and the challenge of information fusion, which machine learning can also handle with relative simplicity. We will show some successful applications of ML in photometric analyses of satellites.
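To make the calibrated-probability point concrete, the following is a minimal sketch, not the speaker's actual pipeline: a Gaussian Process classifier trained on synthetic two-dimensional "light curve" features (both features and the stable/tumbling labels are hypothetical, chosen only for illustration) whose `predict_proba` output is a posterior class probability rather than a raw score.

```python
# Minimal sketch: GP classification yielding posterior class probabilities.
# The features and labels below are synthetic placeholders, not real data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
# Hypothetical summary statistics of two photometric behavior classes.
X_stable  = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
X_tumbler = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(50, 2))
X = np.vstack([X_stable, X_tumbler])
y = np.array([0] * 50 + [1] * 50)   # 0 = "stable", 1 = "tumbling" (illustrative)

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)).fit(X, y)
proba = gpc.predict_proba([[2.0, 2.0]])[0]  # posterior probabilities, sum to 1
```

Unlike a hard classifier, the Bayesian posterior here carries an uncertainty estimate that can be traced back to the model's assumptions, which is the property the abstract emphasizes when labeled validation data are unavailable.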
Dr. Phan Dao is a researcher in satellite and space object electro-optical signature analysis at the Air Force Research Laboratory, Space Vehicles Directorate. His research includes aspects of space object characterization and the space debris environment. His current focus is on the development of analytical tools with increasing reliance on statistical machine learning techniques. He has been with AFRL for 30 years, mainly in research and development. Prior to his work in space technology, he was involved in the development and application of remote sensors and lidar for atmospheric, chemical and particle diagnostics. Before AFRL, he was with KMS Fusion, Ann Arbor, MI, developing novel laser concepts. He received a Ph.D. in Physics from the University of Colorado in Boulder, Colorado, in 1985.
In this study we show a first step toward nonlinear statistics by applying Choquet calculus to probability theory. Throughout the study we take a constructive approach. For nonlinear statistics, we first consider a distorted probability space on the non-negative real line. A distorted probability measure is derived from a conventional probability measure by a monotone transformation with a generator (usually called a distortion function); we deal with two classes of parametric generators. Next, we explore some properties of Choquet integrals of non-negative continuous functions with respect to distorted probabilities. Then we calculate basic statistics such as the distorted mean and variance of a random variable for the uniform, exponential and gamma distributions.
In addition, we consider Choquet integrals of real-valued functions to deal with a distorted probability space on the real line. We also calculate basic statistics for uniform and normal distributions.
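As a small worked illustration of the distorted-mean calculation described above (assuming, hypothetically, a power-law generator g(u) = u^α, one simple parametric distortion): for a non-negative random variable X with survival function S(t) = P(X > t), the Choquet integral with respect to the distorted probability g∘P reduces to the ordinary integral of g(S(t)) over [0, ∞). For X ~ Exp(λ) this gives a distorted mean of 1/(λα), which the sketch below checks numerically.

```python
# Numerical sketch of a distorted mean as a Choquet integral:
#   (C)∫ X d(g∘P) = ∫_0^∞ g(P(X > t)) dt   for non-negative X.
import numpy as np

def distorted_mean(survival, g, upper=50.0, n=200_001):
    # Trapezoidal approximation of the integral of g(S(t)) on [0, upper].
    t = np.linspace(0.0, upper, n)
    vals = g(survival(t))
    dt = t[1] - t[0]
    return dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

lam, alpha = 1.0, 2.0
S = lambda t: np.exp(-lam * t)   # survival function of Exp(lam)
g = lambda u: u ** alpha         # power distortion generator (assumed form)
m = distorted_mean(S, g)         # closed form: 1 / (lam * alpha) = 0.5
```

With α = 1 the distortion is the identity and the calculation recovers the ordinary mean 1/λ; α ≠ 1 shows how the distortion re-weights tail probabilities.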
After graduating from the Department of Physics, the University of Tokyo, Michio Sugeno worked at a company for three years. He then served the Tokyo Institute of Technology as Research Associate, Associate Professor and Professor from 1965 to 2000. After retiring from the Tokyo Institute of Technology, he worked as Laboratory Head at the Brain Science Institute, RIKEN, from 2000 to 2005, and then as Distinguished Visiting Professor at Doshisha University from 2005 to 2010. Finally, he worked as Emeritus Researcher at the European Centre for Soft Computing in Spain from 2010 to 2015. He is Emeritus Professor at the Tokyo Institute of Technology. He was President of the Japan Society for Fuzzy Theory and Systems from 1991 to 1993, and also President of the International Fuzzy Systems Association from 1997 to 1999. In 2000 he became, together with Zadeh, the first recipient of the IEEE Pioneer Award in Fuzzy Systems. He also received the 2010 IEEE Frank Rosenblatt Award and the Kampé de Fériet Award in 2012.
We live in an exciting time: in the midst of a new technological revolution. Unlike the previous industrial revolution, which extended mostly our physical powers, this revolution has the potential to extend our cognitive powers. One of the basic ingredients that enabled this revolution is the emergence of so-called "big data": datasets that are not only large in size but also much more complex and granular than the "old datasets". However, with big data come big problems: many of the standard tools we use for data analysis are no longer appropriate when applied to big data. In this talk, I will discuss some of the challenges in developing new tools for the analysis of big data, as well as some of their limitations. In particular, I will address the relationship between predictive and causal modeling, and the reproducibility crisis in science. I will also discuss some of the new opportunities in the area of large-scale distributed decision making, including new developments in crowdsourcing, computational social choice and personalized learning.
Dr. Pedja Neskovic is a program officer at ONR, where he oversees the mathematical data science and computational methods for decision making programs. He is also an adjunct associate professor of brain science at Brown University and a visiting professor at Johns Hopkins University. Dr. Neskovic received his BSc in theoretical physics from Belgrade University, and his PhD in physics from Brown University. He was a postdoctoral fellow at the Institute for Brain Science at Brown University. Within the scope of his programs, he addresses various basic research problems. These include methods for the analysis of big data and small data, analysis of complex networks such as social and brain networks, reproducibility in science, and causal inference. He is also interested in developing methods for large-scale distributed decision making that utilize novel crowdsourcing and collaborative techniques.
Technologies in business schools have evolved from the days of MIS and DSS to Big Data and Business Analytics. The current Artificial Intelligence boom is having a major impact on industries and business schools, especially in the finance and accounting areas. As an example, the new buzzword FinTech is attracting more and more attention from both the finance industry and business schools. This talk will discuss how Internet, Big Data and Artificial Intelligence technologies influence industries and business school education. It will also explore the challenges and opportunities in research and applications.
Dr. Qiang Ye is Dean and Professor of Information Systems in the School of Management at the Harbin Institute of Technology. He has worked at the McCombs School of Business at the University of Texas at Austin, the Rady School of Management at the University of California San Diego, and the School of Hotel & Tourism Management at the Hong Kong Polytechnic University as a Postdoctoral Fellow, Research Fellow or Visiting Professor.
Dr. Ye is a Senior Editor of the Journal of Electronic Commerce Research, an Area Editor of Electronic Commerce Research and Applications, and a Guest Associate Editor of MIS Quarterly. His research interests include Big Data and Business Analytics, e-Commerce, e-Tourism, and FinTech, among others. He has published about thirty papers in journals including Production and Operations Management, Tourism Management and Decision Support Systems. Dr. Ye received the "National Science Fund of China for Distinguished Young Scholars" in 2012 and was named a "Cheung Kong Scholar Professor" by the Ministry of Education of China in 2016.
Computationally predicting drug-target interactions is useful for discovering potential new drugs. Currently, promising machine learning approaches to this problem use not only known drug-target interactions but also drug and target similarities. This idea is well accepted pharmacologically, since the two types of similarities correspond to two recently advocated concepts, the so-called chemical space and genomic space. I will start this talk by describing the background of the drug-target interaction prediction problem, particularly why similarity-based approaches have attracted attention recently. I will then review existing approaches and their bottlenecks, and present recent factor model-based approaches, which allow low-rank approximation of the given matrices and thereby address the shortcomings of past methods. I also note that the problem setting of similarity-based prediction of drug-target interactions is very general, in the sense of binary relations between two sets of entities in which the entities have similarities to each other. This general setting can be found in many applications, such as recommender systems.
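The factor-model idea can be sketched in a few lines. The toy below is not any specific published method: it fits a small, entirely hypothetical binary drug-by-target interaction matrix with a logistic low-rank model Y ≈ sigmoid(U Vᵀ) by gradient descent, omitting the similarity terms for brevity.

```python
# Toy sketch of factor-model prediction of a binary interaction matrix.
# The matrix Y is invented for illustration; similarities are omitted.
import numpy as np

rng = np.random.default_rng(0)
Y = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # rows: drugs, cols: targets (hypothetical)

k, lr, reg = 2, 0.2, 1e-3                # latent rank, step size, L2 penalty
U = 0.1 * rng.standard_normal((Y.shape[0], k))   # drug factors
V = 0.1 * rng.standard_normal((Y.shape[1], k))   # target factors

for _ in range(3000):
    P = 1.0 / (1.0 + np.exp(-U @ V.T))   # predicted interaction probabilities
    E = P - Y                            # gradient of logistic loss w.r.t. logits
    U, V = U - lr * (E @ V + reg * U), V - lr * (E.T @ U + reg * V)

P = 1.0 / (1.0 + np.exp(-U @ V.T))       # final predictions
```

In the similarity-based methods the talk describes, the drug-drug and target-target similarity matrices would additionally regularize U and V (e.g. encouraging similar drugs to share nearby latent factors), which is what connects the chemical and genomic spaces to the factorization.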
Hiroshi Mamitsuka is a Professor at the Bioinformatics Center, Institute for Chemical Research, Kyoto University, jointly appointed as a Professor in the School of Pharmaceutical Sciences of the same university. He is also a FiDiPro (Finland Distinguished Professor Program) Professor in the Department of Computer Science, Aalto University, Finland. His current research interests cover a variety of aspects of machine learning and their diverse applications, mainly in cellular- or molecular-level biology, chemistry and the medical sciences. He has published more than 100 scientific papers, including papers that appeared in top-tier conferences and journals in machine learning and bioinformatics, such as ICML, KDD, ISMB and Bioinformatics. He has also served as a program committee member of numerous conferences and as an associate editor of several well-known journals in related fields. Prior to joining Kyoto University, he worked in industry for more than ten years on data analytics in business sectors, covering customer/revenue churn, web-access patterns, campaign management, collaborative filtering, recommendation engines, etc.