Objectives

    Representation learning has recently enjoyed enormous success in learning useful representations of semantic data from areas such as vision, speech, and natural language processing. It has developed at the crossroads of different disciplines and application domains, and has become a research field in its own right, with several successful workshops and sessions at major machine learning and data mining conferences. We take a broad view of this field and want to attract researchers concerned with statistical learning of representations, including matrix- and tensor-based latent factor models, probabilistic latent models, metric learning, graphical models, and recent techniques such as deep learning, feature learning, and compositional models, as well as issues concerning non-linear structured prediction models.

    The first Representation Learning for Semantic Data workshop (ReLSD 2014) was held in conjunction with ECML/PKDD 2014 and attracted considerable attention and participation from researchers and companies. This year, we hold the second ReLSD workshop on a larger platform, ICDM 2015. The focus of this workshop is on representation learning approaches applied to semantic data arising from the real world; new models and learning algorithms based on representation learning that address the challenges of semantic data mining are encouraged. This one-day workshop will include a mixture of invited talks and contributed presentations covering a broad range of subjects pertinent to the workshop theme. In addition to classical paper presentations, the call also invites demonstrations of applications on these topics. We believe this workshop will accelerate the process of identifying the power of representation learning on semantic data.

Topics of Interest

    The focus of this workshop is on representation learning approaches, including deep learning, feature learning, metric learning, algebraic and probabilistic latent models, dictionary learning, and other compositional models, applied to problems in semantic data mining. Papers on new models and learning algorithms that combine aspects of the two fields of representation learning and semantic data mining are especially welcome.

    A non-exhaustive list of relevant topics:
    - unsupervised representation learning and its applications
    - supervised representation learning and its applications
    - metric learning, kernel learning, and their applications
    - hierarchical models for data mining
    - optimization for representation learning
    - other related applications based on representation learning.

    We also encourage submissions that relate research results from other areas to the workshop topics.

Workshop Organizers

  • Patrick Gallinari: Université Pierre et Marie Curie, France
  • Sang-Wook Kim: Hanyang University, Korea
  • Jun Guo: Beijing University of Posts and Telecommunications, China
  • Sheng Gao: Beijing University of Posts and Telecommunications, China

Program Committee (Tentative list)

  • Thierry Artieres, Université Pierre et Marie Curie, France
  • Samy Bengio, Google, USA
  • Yoshua Bengio, University of Montreal, Canada
  • Antoine Bordes, Facebook NY, USA
  • Leon Bottou, MSR NY, USA
  • Joachim Buhmann, ETH Zurich, Switzerland
  • Zheng Chen, Microsoft, China
  • Ronan Collobert, IDIAP, Switzerland
  • Patrick Fan, Virginia Tech, USA
  • Patrick Gallinari, Université Pierre et Marie Curie, France
  • Huiji Gao, Arizona State University, USA
  • Marco Gori, University of Siena, Italy
  • Sheng Gao, Beijing University of Posts and Telecommunications, China
  • Jun He, Renmin University, China
  • Sang-Wook Kim, Hanyang University, Korea
  • Stefanos Kollias, NTUA, Greece
  • Hugo Larochelle, University of Sherbrooke, Canada
  • Zhanyu Ma, Beijing University of Posts and Telecommunications, China
  • Yann LeCun, NYU Courant Institute and Facebook, USA
  • Nicolas Le Roux, Criteo, France
  • Dou Shen, Baidu, China
  • Alessandro Sperduti, University of Padova, Italy
  • Shengrui Wang, University of Sherbrooke, Canada
  • Jason Weston, Google NY, USA
  • Jun Yan, Microsoft, China
  • Guirong Xue, Alibaba, China
  • Shuicheng Yan, National University of Singapore, Singapore
  • Kai Yu, Baidu, China
  • Benyu Zhang, Google, USA
  • Jun Xu, CAS, China

Program

    08:30 – 08:35 | Welcome
    08:35 – 09:15 | Invited Talk: Prof. Fei Wang. Learning Effective Representations with Electronic Health Records
    09:15 – 10:00 | Invited Talk: Prof. Hanghang Tong. Inside the Atoms: Mining a Network of Networks and Beyond
    10:05 – 10:15 | Break
    10:15 – 10:35 | Babak Saleh and Ahmed Elgammal, A Unified Framework for Painting Classification
    10:35 – 10:55 | Ji Qi and Yukio Ohsawa, Matrix Plane Model: A Novel Measure of Word Co-occurrence and Application on Semantic Relatedness
    10:55 – 11:15 | Juncen Li, Sheng Gao, Zhou Fang, Jianxin Liao, Zhiqing Lin, Music Mood Classification via Deep Belief Network
    11:15 – 11:35 | Cedric De Boom, Steven Van Canneyt, Steven Bohez, Thomas Demeester, and Bart Dhoedt, Learning Semantic Similarity for Very Short Texts
    11:35 – 11:55 | Zhou Fang, Sheng Gao, Juncen Li and Bo Li, Cross-Domain Recommendation via Tag Matrix Transfer
    11:55 – 12:15 | Junjie Yan, Yuanyuan Qiao, Jie Yang, and Sheng Gao, Mining Individual Mobile Internet User Behavior on Location and Interests

    12:15 – 12:20 | Conclusion

Submission of Papers

    We invite two types of submissions for this workshop:

  • Paper submission

    We welcome submissions of unpublished research results. Papers are limited to a maximum of 8 pages in the IEEE 2-column format and should be typeset using the IEEE Computer Society proceedings manuscript style; submissions do not need to be anonymized. All submissions will be peer reviewed, with reviewer identities kept anonymous, and evaluated on the basis of their technical quality. Submissions must be made through the official system: https://wi-lab.com/cyberchair/2015/icdm15/scripts/ws_submit.php. Following ICDM tradition, all accepted workshop papers will be published in formal proceedings by the IEEE Computer Society Press. Selected accepted papers will also be invited for publication in a special issue, “Deep Machine Learning for Semantic Data”, of an SCI-indexed journal, expected to appear in early 2016.

  • Demo submission

    Demo submissions should consist of a one-page description of the demonstration in free format.