
An Investigation of Computational and Informational Limits in Gaussian Mixture Clustering

Nathan Srebro (nati@cs.toronto.edu), Dept. of Computer Science, University of Toronto, Toronto, Ontario, Canada
Gregory Shakhnarovich (gregory@cs.brown.edu), Dept. of Computer Science, Brown University, Providence, Rhode Island, USA
Sam Roweis (roweis@cs.toronto.edu), Dept. of Computer Science, University of Toronto, Toronto, Ontario, Canada

Abstract

We investigate under what conditions clustering by learning a mixture of spherical Gaussians is (a) computationally tractable; and (b) statistically possible. We show that using principal component projection greatly aids in recovering the clustering using EM; present empirical evidence that even using such a projection, there is still a large gap between the number of samples needed to recover the clustering using EM, and the number of samples needed without computational restrictions; and characterize the regime in which such a gap exists.

1. Introduction

Consider clustering a collection of points by fitting a mixture-of-Gaussians model to the data. Viewed as a problem of optimizing an objective function, such as the likelihood, this problem seems to be hard in the traditional worst-case sense. On the other hand, when the data is inherently clustered, and enough data is available, local search methods typically succeed in optimizing the objective and recovering the clustering. This leads to the conventional wisdom that "clustering is not hard: it is either easy, or not interesting". How true is this statement? Is there a regime in which clustering is hard even though it is interesting? When is clustering hard?

Lately, a series of theoretical results established that if data is generated from an adequately separated mixture of Gaussians, and enough samples are available, then clustering is in fact easy: polynomial-time algorithms exist that can recover

Appearing in Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, 2006. Copyright 2006 by the author(s)/owner(s).
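The pipeline the abstract describes, projecting the data onto its top principal components and then running EM for a spherical Gaussian mixture in the reduced space, can be sketched as follows. This is not the authors' code: it is a minimal numpy sketch under assumed settings (4 well-separated clusters embedded in R^100, farthest-point initialization), and all function names and parameters here are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(n, means, sigma):
    """Draw n points from an equal-weight mixture of spherical Gaussians."""
    labels = rng.integers(len(means), size=n)
    X = means[labels] + sigma * rng.standard_normal((n, means.shape[1]))
    return X, labels

def pca_project(X, k):
    """Project the data onto its top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def farthest_first(X, k):
    """Greedy farthest-point init; reliable when clusters are well separated."""
    idx = [0]
    d2 = ((X - X[0]) ** 2).sum(axis=1)
    for _ in range(k - 1):
        idx.append(int(d2.argmax()))
        d2 = np.minimum(d2, ((X - X[idx[-1]]) ** 2).sum(axis=1))
    return X[idx].copy()

def em_spherical(X, k, iters=100):
    """Plain EM for a k-component spherical Gaussian mixture."""
    n, d = X.shape
    mu = farthest_first(X, k)
    sigma2 = np.full(k, X.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities, computed in log space for stability
        sq = ((X[:, None, :] - mu[None]) ** 2).sum(axis=-1)  # (n, k)
        logp = np.log(pi) - 0.5 * d * np.log(2 * np.pi * sigma2) - sq / (2 * sigma2)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, per-component spherical variances
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        sq = ((X[:, None, :] - mu[None]) ** 2).sum(axis=-1)
        sigma2 = np.maximum((r * sq).sum(axis=0) / (d * nk), 1e-8)
    return r.argmax(axis=1)

# Four well-separated centers embedded in R^100; 97 dimensions are pure noise.
means = np.hstack([np.vstack([10 * np.eye(3), np.zeros((1, 3))]), np.zeros((4, 97))])
X, true_labels = sample_mixture(2000, means, sigma=1.0)

# Project to k = 4 dimensions, then cluster with EM in the reduced space.
pred = em_spherical(pca_project(X, 4), k=4)
```

The point of the projection step is that the cluster centers span a low-dimensional subspace, so projecting onto the top-k components preserves the between-center separation while discarding most of the noise dimensions, which is exactly the regime in which EM tends to succeed.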
