
2.4「Stanford Algorithms」ASYMPTOT

Author: 墨小匠 | Posted 2019-10-03 19:52

    In this lecture, we'll continue our formal treatment of asymptotic notation.

    We've already discussed big O notation, which is by far the most important and ubiquitous concept in asymptotic notation, but, for completeness, I do want to tell you about a couple of close relatives of big O, namely omega and theta.

    If big O is analogous to less than or equal to, then omega and theta are analogous to greater than or equal to, and equal to, respectively.
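
    In symbols, the analogy looks like this:

        big O:      T(n) = O(f(n))      ~  "less than or equal to"     (upper bound)
        big Omega:  T(n) = Omega(f(n))  ~  "greater than or equal to"  (lower bound)
        big Theta:  T(n) = Theta(f(n))  ~  "equal to"                  (tight bound)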

    But let's treat them a little more precisely.

    The formal definition of omega notation closely mirrors that of big O notation.

    We say that one function, T of N, is big omega of another function, F of N, if eventually, that is for sufficiently large N, it's lower bounded by a constant multiple of F of N.

    And we quantify the ideas of "a constant multiple" and "eventually" in exactly the same way as before, namely via explicitly giving two constants, C and N naught, such that T of N is bounded below by C times F of N for all sufficiently large N.

    That is, for all N at least N naught.
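
    In symbols, the definition just given reads:

        T(n) = \Omega(f(n)) \iff \exists\, c > 0,\ n_0 \ge 1 \ \text{such that}\ T(n) \ge c \cdot f(n) \ \text{for all}\ n \ge n_0.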

    There's a picture just like there was for big O notation.

    Perhaps we have a function T of N which looks something like this green curve.

    And then we have another function F of N which is above T of N.

    But then when we multiply F of N by one half, we get something that, eventually, is always below T of N.

    So in this picture, this is an example where T of N is indeed big Omega of F of N.

    As far as what the constants are, well, the multiple that we use, C, is obviously just one half.

    That's what we're multiplying F of N by.

    And as before, N naught is the crossing point between the two functions.

    So, N naught is the point after which C times F of N always lies below T of N forevermore.
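
    As a concrete instance of this picture (an illustrative example, not one from the lecture): take T(n) = n and f(n) = n + 10. Then with c = 1/2,

        \frac{1}{2} f(n) = \frac{n}{2} + 5 \le n = T(n) \ \text{for all}\ n \ge 10,

    so T(n) = \Omega(f(n)) with c = 1/2 and n_0 = 10.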

    So that's Big Omega.

    Theta notation is the equivalent of equals, and so it just means that the function is both Big O of F of N and Omega of F of N.

    An equivalent way to think about this is that, eventually, T of N is sandwiched between two different constant multiples of F of N.

    I'll write that down, and I'll leave it to you to verify that the two notions are equivalent.

    That is, one implies the other and vice versa.

    So what do I mean by T of N is eventually sandwiched between two multiples of F of N? Well, I just mean we choose two constants.

    A small constant, C1, and a big constant, C2, such that for all N at least N naught, T of N lies between C1 times F of N and C2 times F of N.
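
    In symbols, the sandwich definition reads:

        T(n) = \Theta(f(n)) \iff \exists\, c_1, c_2 > 0,\ n_0 \ge 1 \ \text{such that}\ c_1 \cdot f(n) \le T(n) \le c_2 \cdot f(n) \ \text{for all}\ n \ge n_0.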

    One way that algorithm designers can be quite sloppy is by using big O notation when theta notation would be more accurate.

    That said, it's a common convention, and I will often follow it in this class.

    Let me give you an example.

    Suppose we have a subroutine, which does a linear scan through an array of length N.

    It looks at each entry in the array and does a constant amount of work with each entry.

    So the merge subroutine would be more or less an example of a subroutine of that type.

    So even though the running time of such a subroutine is patently theta of N (it does a constant amount of work for each of the N entries, so it's exactly theta of N), we'll often just say that it has running time O of N.

    We won't bother to make the stronger statement that it's theta of N.
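
    To make this concrete, here is a minimal Python sketch of such a subroutine (illustrative code, not from the lecture; the name linear_scan and the summing work per entry are my own choices):

        def linear_scan(arr):
            """Scan an array once, doing constant work per entry.

            One pass over the n entries, constant work each, so the running
            time is exactly Theta(n); by the convention just described, we
            would usually just state the upper bound O(n).
            """
            total = 0
            for x in arr:       # exactly n iterations
                total += x      # constant work per entry
            return total

    For example, linear_scan([3, 1, 4, 1, 5]) returns 14 after touching each of the five entries exactly once.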

    The reason we do that is that, as algorithm designers, what we really care about is upper bounds.

    We want guarantees on how long our algorithms are going to run, so naturally we focus on the upper bounds and not so much on the lower bound side.

    So don't get confused.

    Once in a while, there will be a quantity which is obviously theta of F of N, and I'll just make the weaker statement that it's O of F of N.

    The next quiz is meant to check your understanding of these three concepts: Big O, Big Omega, and Big Theta notation.

