Information Retrieval Evaluation Metrics (nDCG, MRR, MAP)

nDCG (Normalized Discounted Cumulative Gain)

Adapted from the Wikipedia article on the topic.

Normalized Discounted Cumulative Gain: a measure of the effectiveness of a search engine or similar program.

Two assumptions:

1. Highly relevant documents are more useful when they appear earlier in the result list (i.e., at a higher rank).

2. Highly relevant documents are more useful than marginally relevant documents, which in turn are more useful than irrelevant documents.

$DCG$ grew out of an earlier, more basic measure: $CG$ (Cumulative Gain).

CG

$CG$ ignores the ordering within the result set and simply sums the graded relevance scores. The $CG$ over the top $p$ positions is:

$$
CG_p = \sum_{i=1}^{p} rel_i
$$

where $rel_i$ is the graded relevance of the result at position $i$ in the result list.

DCG

$DCG$ replaces $CG$ as a more accurate measure.

A highly relevant document should be penalized if it appears low in the ranking. The $DCG$ over the top $p$ positions is:

$$
DCG_p = \sum_{i=1}^{p}\frac{rel_i}{\log_2(i+1)} = rel_1 + \sum_{i=2}^{p}\frac {rel_i}{\log_2 (i+1)}
$$

An alternative formulation of $DCG$ places stronger emphasis on relevance:

$$
DCG_p = \sum_{i=1}^{p} \frac{2^{rel_i}-1}{\log_2(i+1)}
$$

If the graded relevance takes only the binary values 0 and 1, then $2^{rel_i}-1 = rel_i$ and the two formulas give the same result.

nDCG

Because result lists vary in length from query to query, the measure needs to be normalized:

$$
nDCG_p =\frac{DCG_p}{IDCG_p}
$$

$IDCG_p$ (Ideal DCG) is the maximum possible $DCG$ at position $p$, i.e., the $DCG$ obtained under a perfect ranking.

This way, $nDCG$ yields a normalized value no matter what the query is, so retrieval effectiveness can be compared across queries.

A perfect ranking makes $DCG_p$ equal to $IDCG_p$ and hence $nDCG_p$ equal to 1; in general, $nDCG$ lies between 0 and 1.

Example: a query returns six documents $D1, D2, D3, D4, D5, D6$, whose judged relevance grades are $3, 2, 3, 0, 1, 2$. Then:
$$
CG_6=\sum_{i=1}^6 rel_i = 3+2+3+0+1+2=11
$$

| $i$ | $rel_i$ | $\log_2(i+1)$ | $\frac{rel_i}{\log_2(i+1)}$ |
| --- | --- | --- | --- |
| 1 | 3 | 1 | 3 |
| 2 | 2 | 1.585 | 1.262 |
| 3 | 3 | 2 | 1.5 |
| 4 | 0 | 2.322 | 0 |
| 5 | 1 | 2.585 | 0.387 |
| 6 | 2 | 2.807 | 0.712 |

So the value of $DCG_6$ is:
$$
{DCG_{6}} =\sum _{i=1}^{6}{\frac {rel_{i}}{\log _{2}(i+1)}}=3+1.262+1.5+0+0.387+0.712=6.861
$$

Suppose we also know of two additional documents, with relevance grades 3 and 2, that are relevant to this query but were not returned. The ideal ordering is then $3, 3, 3, 2, 2, 2, 1, 0$, so:
$$
\begin{aligned}
&IDCG_{6} = 3 + \frac{3}{1.585} + \frac{3}{2} + \frac{2}{2.322} + \frac{2}{2.585} + \frac{2}{2.807} = 8.740 \\
\\
& nDCG_{6} ={\frac {DCG_{6}}{IDCG_{6}}}={\frac {6.861}{8.740}}=0.785 \\
\end{aligned}
$$

A drawback of $nDCG$: when only a few items are ranked (say 1 to 3), any ordering gives a similar $nDCG$ value, so AUC (area under the ROC curve) can be considered instead.

Reference article on AUC: http://blog.csdn.net/chjjunking/article/details/5933105

Below is an nDCG function I wrote in Python.

```python
import math


def calc_dcg(sorted_vec, at):
    '''
    sorted_vec: list[tuple], each element is a tuple
        (v0, v1): v0 is the predicted score, v1 is the label (relevance) score
    at: int, calculate DCG@at
    '''
    # Take the label scores of the top `at` results
    ranking = [t[1] for t in sorted_vec[0:at]]
    # Exponential gain (2^rel - 1), logarithmic discount log2(rank + 1) for 1-based rank
    dcg_ = sum((2 ** r - 1) / math.log(i + 2, 2) for i, r in enumerate(ranking))
    return dcg_


def calc_ndcg(vec, at):
    '''
    vec: list[tuple], each element is a tuple
        (v0, v1): v0 is the predicted score, v1 is the label (relevance) score
    at: int, calculate nDCG@at
    '''
    # Ideal DCG: rank by the label score
    sorted_vec = sorted(vec, key=lambda t: t[1], reverse=True)
    ideal_dcg = calc_dcg(sorted_vec, at)
    # Actual DCG: rank by the predicted score
    sorted_vec = sorted(vec, key=lambda t: t[0], reverse=True)
    cur_dcg = calc_dcg(sorted_vec, at)
    if ideal_dcg == 0:
        return 1
    return cur_dcg / ideal_dcg
```
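
For instance, reusing the relevance grades from the worked example above (the predicted scores here are made up purely for illustration), the function can be called like this. Note the result will not match the 0.785 above, since this code uses the exponential $2^{rel}-1$ gain and only ranks the six returned documents:

```python
# (predicted score, label) pairs; the labels are the grades 3, 2, 3, 0, 1, 2 from the
# example above, the predicted scores are invented for illustration only
vec = [(0.9, 3), (0.8, 2), (0.7, 3), (0.6, 0), (0.5, 1), (0.4, 2)]
print(calc_ndcg(vec, 6))
```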

MRR (Mean Reciprocal Rank)

MRR is a common metric for measuring the effectiveness of search algorithms. It is widely used for problems that allow returning multiple results, or for problems that are still hard to solve well (returning only the top-1 result would give poor precision or recall, so while the technology is immature, multiple results are returned). For such problems the system assigns each returned result a confidence score, sorts by that score, and returns the highest-scoring results first.

The core idea of MRR is simple: the quality of a returned result list depends on the position of the first correct answer; the earlier the first correct answer appears, the better the result.

Concretely, for a single query, if the first correct answer is ranked at position $n$, its reciprocal-rank score is $\frac1n$ (if there is no correct answer, the score is 0).

The formula is:

$$
MRR = \frac1 {\left| Q \right|}\sum\limits_{i = 1}^{\left| Q \right|} \frac 1 {rank_i}
$$

where $Q$ is the set of sample queries, ${\left| Q \right|}$ is the number of queries in $Q$, and $rank_i$ is the rank of the first correct answer for the $i$-th query.

Example

For instance, suppose a test set contains 4 queries. For the first three queries, the first correct answer is ranked at positions 3, 1, and 5 respectively, and the fourth query has no correct answer. The system's MRR score is then:

$$
MRR=\left(\frac13+\frac11+\frac15+0 \right)÷4=0.383
$$
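
A minimal sketch of this computation in Python (the function name and the input format, a list of first-correct-answer ranks with `None` for queries that have no correct answer, are my own choices, not a standard API):

```python
def mean_reciprocal_rank(first_correct_ranks):
    '''
    first_correct_ranks: list of 1-based ranks of the first correct answer,
        one per query; None means the query has no correct answer (score 0).
    '''
    scores = [0.0 if rank is None else 1.0 / rank for rank in first_correct_ranks]
    return sum(scores) / len(scores)


# The four-query example above: ranks 3, 1, 5, and one query with no correct answer
print(mean_reciprocal_rank([3, 1, 5, None]))  # ~0.383
```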

MAP (Mean Average Precision)

Precision

Precision is the fraction of the documents retrieved that are relevant to the user’s information need.

$$
precision = \frac {\left| {\{ relevant\;documents\} \cap \{ retrieved\;documents\} } \right|} {\left| {\left\{ {retrieved\;documents} \right\}} \right|}
$$

Precision takes all retrieved documents into account. It can also be evaluated at a given cut-off rank, considering only the topmost results returned by the system. This measure is called precision at n or P@n.
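
As a sketch, P@n can be computed from the ranked document ids and the set of relevant ids (the names below are illustrative, not from any particular library):

```python
def precision_at_n(ranked_docs, relevant_docs, n):
    '''Fraction of the top-n retrieved documents that are relevant.'''
    top_n = ranked_docs[:n]
    return sum(1 for doc in top_n if doc in relevant_docs) / n


# P@3 for a ranking where d1 and d3 are the relevant documents
print(precision_at_n(['d1', 'd2', 'd3', 'd4'], {'d1', 'd3'}, 3))  # 2/3
```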

Average precision

Precision and recall are single-value metrics based on the whole list of documents returned by the system. For systems that return a ranked sequence of documents, it is desirable to also consider the order in which the returned documents are presented. By computing a precision and recall at every position in the ranked sequence of documents, one can plot a precision-recall curve, plotting precision $p(r)$ as a function of recall $r$. Average precision computes the average value of $p(r)$ over the interval from $r=0$ to $r=1$

$$
AveP=\int_0^1p\left(r\right)dr
$$

This value is the area under the precision-recall curve (the precision-recall AUC). In practice, the integral is replaced with a finite sum over every position in the ranked sequence of documents:

$$
AveP = \sum _{k=1}^n{P\left(k\right)}\Delta{r\left(k\right)}
$$

where $k$ is the rank in the sequence of retrieved documents, $n$ is the number of retrieved documents, $P(k)$ is the precision at cut-off $k$ in the list, and $\Delta r(k)$ is the change in recall from item $k-1$ to $k$.

This finite sum is equivalent to:

$$
{AveP} = \frac{\sum_{k=1}^n (P(k) \times {rel}(k))}{number\; of\; relevant \;documents}
$$

where ${rel} (k)$ is an indicator function equaling 1 if the item at k is a relevant document, zero otherwise. Note that the average is over all relevant documents and the relevant documents not retrieved get a precision score of zero.
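
The finite sum translates directly into code. A minimal sketch, assuming the input is the list of $rel(k)$ indicators in ranked order plus the total number of relevant documents (names are illustrative):

```python
def average_precision(rel_flags, num_relevant):
    '''
    rel_flags: list of 0/1 relevance indicators rel(k), in ranked order.
    num_relevant: total number of relevant documents for the query
        (relevant documents that were not retrieved contribute zero).
    '''
    if num_relevant == 0:
        return 0.0
    hits, total = 0, 0.0
    for k, rel in enumerate(rel_flags, start=1):
        if rel:
            hits += 1
            total += hits / k  # P(k) at each relevant position
    return total / num_relevant


# Relevant documents retrieved at ranks 1, 3 and 5; 5 relevant documents exist in total
print(average_precision([1, 0, 1, 0, 1], 5))  # (1/1 + 2/3 + 3/5) / 5 ~= 0.453
```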

Mean average precision

Mean average precision for a set of queries is the mean of the average precision scores for each query.

$$
MAP=\frac{\sum_{q=1}^Q{AveP(q)}}{Q}
$$

where $Q$ is the number of queries.
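
Continuing the sketch above, MAP simply averages the per-query values:

```python
def mean_average_precision(per_query):
    '''per_query: list of (rel_flags, num_relevant) pairs, one per query.'''
    return sum(average_precision(flags, n) for flags, n in per_query) / len(per_query)
```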

References

http://en.wikipedia.org/wiki/Discounted_cumulative_gain