K-means Hashing: Learning Binary Compact Codes

Author: mogu酱 | Published 2016-09-07 11:18, read 188 times

Original: http://blog.csdn.net/liuheng0111/article/details/52242907

Notes on the paper K-means Hashing: An Affinity-Preserving Quantization Method for Learning Binary Compact Codes:

1. Overview

Existing methods fall into two families according to how distances are computed:

• Hamming-based methods (LSH)

• lookup-based methods (vector quantization, product quantization)

Hamming-based quantization: hyperplanes or kernelized hyperplanes (each hyperplane is encoded with one bit).

Lookup-based quantization: k-means.

• Hamming-based methods are fast at retrieval: one million 64-bit Hamming codes can be scanned within 1.5 ms. But quantizing with hyperplanes incurs large error, so retrieval quality is worse than that of lookup-based methods.

• Lookup-based methods quantize with k-means, which is optimal for minimizing quantization error; at the same code length they achieve higher accuracy, but distance computation is slower than Hamming-based methods. (The distance between every pair of cluster centers is stored in a lookup table.)
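To make the trade-off concrete, here is a small illustrative sketch (my own, not from the paper) contrasting the two distance computations: XOR-plus-popcount on compact Hamming codes versus a precomputed k × k table of inter-centroid Euclidean distances.

```python
import numpy as np

def hamming_dist(a, b):
    # Hamming distance between two integer codes via XOR + popcount.
    return bin(a ^ b).count("1")

# Lookup-based: with k centroids, precompute all pairwise distances once.
rng = np.random.default_rng(0)
centroids = rng.standard_normal((16, 8))  # k = 16 codewords in R^8
table = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)

def lookup_dist(i, j):
    # Distance between two quantized vectors = a single table read.
    return table[i, j]
```

The Hamming branch needs no stored table at all, which is why it scans millions of codes so quickly; the lookup branch trades O(k²) storage for exact inter-centroid distances.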

2. Idea

K-means Hashing considers quantization and distance computation jointly.

• Quantization: Affinity-Preserving K-means. The k-means clustering stage preserves the affinity between Euclidean and Hamming distances.

• K-means Hashing combines the small quantization error of k-means with the fast distance computation of Hamming codes.

3. Implementation

Codebook and codewords: a quantizer maps a d-dimensional vector x ∈ R^d to another vector q(x) ∈ C = {c_i | c_i ∈ R^d, 0 ≤ i ≤ k − 1}. The set C is known as a codebook, c_i is a codeword, and k is the number of codewords. Given b bits for indexing, there are at most k = 2^b codewords.

    vector quantization

• VQ approximates the distance between two vectors by the distance between their two codewords:

• d(x, y) ≈ d(c_i(x), c_i(y))

• i(x) denotes the index of the cell that contains x. A k × k lookup table of inter-codeword distances is built.

• The goal is to additionally exploit the speed advantage of Hamming distance computation.
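The VQ approximation above can be sketched as follows (`quantize` and `approx_dist` are assumed helper names, not the paper's code): i(x) is the index of the nearest codeword, and the distance between two vectors is replaced by a single read from the precomputed table.

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.standard_normal((8, 4))  # codebook: k = 8 codewords in R^4
# k x k lookup table of inter-codeword Euclidean distances.
table = np.linalg.norm(C[:, None] - C[None, :], axis=-1)

def quantize(x):
    # i(x): index of the nearest codeword (the cell containing x).
    return int(np.argmin(np.linalg.norm(C - x, axis=1)))

def approx_dist(x, y):
    # d(x, y) ~ d(c_i(x), c_i(y)), fetched from the table.
    return table[quantize(x), quantize(y)]
```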

4. A Naive Two-step Method

• Step 1: run k-means quantization to obtain 2^b codewords.

• Step 2: assign each codeword an optimal index.

• This assignment is combinatorially complex: there are (2^b)! candidate index permutations (b is the number of code bits).

• For b ≤ 3 bits the problem is feasible. When b = 4 exhaustive search takes over one day, and for b > 4 it is highly intractable.
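The combinatorial blow-up is easy to verify numerically; a tiny sketch:

```python
from math import factorial

# Assigning indices to k = 2^b codewords is a search over (2^b)! permutations.
for b in range(1, 5):
    k = 2 ** b
    print(f"b={b}: k={k}, candidate assignments={factorial(k)}")
```

Already at b = 4 there are 16! ≈ 2.1 × 10^13 candidate assignments, matching the paper's observation that exhaustive search becomes impractical.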

    Affinity-Preserving K-means

• The naive method does not account for the quantization error of the first k-means step; this paper considers the quantization error and the affinity error jointly.

• The objective is E = E_quan + λ E_aff, where E_quan = (1/n) Σ_x ‖x − c_i(x)‖² is the average quantization error.

• E_aff = Σ_i Σ_j w_ij (d(c_i, c_j) − d_h(i, j))², where w_ij weights each cell pair by its frequency (w_ij ∝ n_i n_j) and d_h(i, j) = s · √h(i, j) is a scaled Hamming distance between the indices.

The optimization alternates between two steps:

Assignment step: fix {c_i} and optimize i(x). As in k-means, each x is assigned to its nearest codeword.

Update step: fix i(x) and optimize {c_i}.

The two steps are iterated; the indices i(x) are initialized with PCA-hashing.
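The alternating scheme can be sketched as below. This is a deliberately simplified illustration: the update step here is the plain k-means mean, whereas the paper's update step additionally accounts for the affinity error term E_aff.

```python
import numpy as np

def alternate(X, C, iters=10):
    # Alternating optimization skeleton (simplified: no affinity term).
    C = C.copy()
    idx = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assignment step: fix {c_i}, optimize i(x) -> nearest codeword.
        idx = np.linalg.norm(X[:, None] - C[None, :], axis=-1).argmin(axis=1)
        # Update step: fix i(x), optimize {c_i}. Here this is the plain
        # k-means centroid mean; the paper's update also minimizes E_aff.
        for i in range(len(C)):
            if np.any(idx == i):
                C[i] = X[idx == i].mean(axis=0)
    return C, idx

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 4))
C, idx = alternate(X, rng.standard_normal((4, 4)))
```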

    Relation to Existing Methods

• Vector Quantization: considers only the quantization error and does not use Hamming codes, so the affinity error is absent. Affinity-Preserving K-means degenerates to VQ when λ = 0.

• Iterative Quantization: suppose the data is b-dimensional. If d(·,·) and d_h(·,·) are to be identical, the 2^b codewords must be the vertices of a b-dimensional hyper-cube.

• Each codeword then has the form c_i = a · Σ_t (±1) r_t, i.e. a vertex of a rotated hyper-cube of side length 2a.

• {r_t} are b-dimensional orthogonal bases.

• When d(·,·) and d_h(·,·) are identical, ITQ is equivalent to this method with λ = ∞ (the affinity error is forced to zero).

    Geometric View

• ITQ is a vector quantization method that uses the vertices of a rotated hyper-cube as the codewords.

• This method additionally allows the hyper-cube to be "stretched" while rotating.

    Generalization to a Product Space

• Computing and storing the Hamming codewords requires space, and a b-bit Hamming code can represent at most 2^b codewords.

For details, refer to the paper Product Quantization for Nearest Neighbor Search.
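A minimal sketch of the product-space idea (following product quantization; `pq_encode` is an assumed name): split each vector into M subvectors, quantize each in its own small codebook, and concatenate the M sub-indices into the final code, so long codes never require one huge codebook.

```python
import numpy as np

def pq_encode(x, codebooks):
    # codebooks: list of M arrays, each of shape (k, d/M),
    # one codebook per subspace.
    M = len(codebooks)
    subs = np.split(x, M)
    # Quantize each subvector independently; the code is the index tuple.
    return [int(np.argmin(np.linalg.norm(cb - s, axis=1)))
            for cb, s in zip(codebooks, subs)]

rng = np.random.default_rng(3)
books = [rng.standard_normal((4, 2)) for _ in range(2)]  # M=2, k=4, d=4
code = pq_encode(rng.standard_normal(4), books)
```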

Experimental results:

• Evaluation metric: the recall is defined as the fraction of retrieved true nearest neighbors to the total number of true nearest neighbors. K = 10 in the experiments.

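The stated recall metric can be written directly (a sketch with an assumed helper name):

```python
def recall_at(true_nn, retrieved):
    # Fraction of the true nearest neighbors that appear among the
    # retrieved results.
    return len(set(true_nn) & set(retrieved)) / len(true_nn)
```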

     


Permalink: https://www.haomeiwen.com/subject/iwsjettx.html