Recognition of Human Behavior Based on Directionally Weighted Local Space-Time Features

Li Junfeng, Zhang Feiyan
Institute of Automation, Zhejiang Sci-Tech University, Hangzhou 310012, China

CLC number: TP391.4    Document code: A    Article ID: 1006-8961(2014)-140537
Citation format: Li Junfeng, Zhang Feiyan. Recognition of human behavior based on directionally weighted local space-time features [J]. Journal of Image and Graphics, 2014.

Abstract:
Objective: Describing human behavior is a key problem in action recognition. To make full use of the training data and ensure that the features describe each action well, this paper proposes a human behavior recognition method based on directional weighting of local space-time features.
Method: First, the brightness-gradient feature of the local space-time features is decomposed into three directions, each describing the behavior from a different direction. For each action, a standard visual vocabulary codebook is constructed directly from the descriptor set of each direction, and the standard three-direction vocabulary distribution of each action is obtained from the training videos. Then, using the standard codebooks of the three directions for each action, the corresponding three-direction vocabulary distributions of a test video are computed, and the behavior is recognized by a weighted similarity measure between these distributions and each action's standard three-direction vocabulary distribution.
Result: Experiments on the Weizmann and KTH datasets yield average recognition rates of 96.04% on the Weizmann dataset and 96.93% on the KTH dataset.
Conclusion: Compared with other behavior recognition methods, the proposed method clearly improves the average recognition rate. It generates a more comprehensive and effective representation of action videos, and it reduces clustering time by producing a separate codebook for each direction. The experi

Keywords: behavior recognition; local space-time features; visual vocabulary; directional weighting
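The pipeline summarized in the abstract (per-direction codebooks built by clustering, per-video vocabulary distributions, and a weighted similarity vote over the three directions) can be sketched roughly as follows. This is an illustrative sketch only: the function names, the use of k-means for codebook construction, histogram intersection as the similarity measure, the toy random descriptors standing in for gradient features, and the per-direction weights are all assumptions for demonstration, not the authors' implementation.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Tiny k-means; returns k cluster centers used as the visual vocabulary."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def vocab_histogram(descriptors, centers):
    """Quantize descriptors against a codebook; return the normalized
    visual-word histogram (the 'vocabulary distribution')."""
    labels = np.argmin(((descriptors[:, None] - centers[None]) ** 2).sum(-1),
                       axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()

def weighted_similarity(test_hists, class_hists, weights):
    """Weighted sum of per-direction histogram similarities
    (histogram intersection used here as a stand-in measure)."""
    return sum(w * np.minimum(h, g).sum()
               for w, h, g in zip(weights, test_hists, class_hists))

# Toy demo: two action classes, three gradient directions each.  Descriptors
# are random 5-D points whose mean encodes the class (0 or 3).
rng = np.random.default_rng(1)
k = 8
train = {c: [rng.normal(loc=c, size=(200, 5)) for _ in range(3)]
         for c in (0, 3)}
# Per-class, per-direction standard codebooks and vocabulary distributions.
books = {c: [kmeans(d, k) for d in ds] for c, ds in train.items()}
std = {c: [vocab_histogram(d, b) for d, b in zip(ds, books[c])]
       for c, ds in train.items()}

# A test video drawn from class 3's distribution.
test = [rng.normal(loc=3, size=(150, 5)) for _ in range(3)]
weights = (0.4, 0.3, 0.3)   # assumed per-direction weights
scores = {c: weighted_similarity(
              [vocab_histogram(t, b) for t, b in zip(test, books[c])],
              std[c], weights)
          for c in books}
pred = max(scores, key=scores.get)
print(pred)  # → 3
```

Note that, as the conclusion suggests, building a separate (smaller) codebook per direction clusters three low-dimensional descriptor sets instead of one concatenated high-dimensional set, which is what reduces the clustering time.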