In the following series of videos, we'll give a formal treatment of asymptotic notation, in particular big-Oh notation, as well as work through a number of examples.
Big-Oh notation concerns functions defined on the positive integers; we'll call such a function T(n). We'll pretty much always have the same semantics for T(n).
We're going to be concerned with the worst-case running time of an algorithm, as a function of the input size, n.
So, the question I want to answer for you in the rest of this video is: what does it mean when we say a function, T(n), is big-Oh of f(n)?
Here, f(n) is some basic function, like, for example, n log n.
So I'll give you a number of answers, a number of ways to think about what big-Oh notation really means.
For starters let's begin with an English definition.
What does it mean for a function to be big-Oh of f(n)? It means that eventually, for all sufficiently large values of n, T(n) is bounded above by a constant multiple of f(n).
Let's think about it in a couple other ways.
So next I'm going to translate this English definition into a picture, and then I'll translate it into formal mathematics.
So pictorially, you can imagine that perhaps we have T(n) denoted by this blue function here.
And perhaps f(n) is denoted by this green function here, which lies below T(n).
But when we double f(n), we get a function that eventually crosses T(n) and forevermore is larger than it.
So in this event, we would say that T(n) indeed is a Big-Oh of f(n).
The reason being that for all sufficiently large n, once we go far enough out right on this graph, indeed, a constant multiple of f(n), namely twice f(n), is an upper bound on T(n).
So finally, let me give you an actual mathematical definition that you could use to do formal proofs.
So how do we say, in mathematics, that eventually T(n) should be bounded above by a constant multiple of f(n)? We say that there exist two constants, which I'll call c and n0.
So that T(n) is no more than c times f(n) for all n that exceed or equal n0.
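The formal definition above can be sanity-checked numerically. Here is a minimal sketch in Python, using a hypothetical example of my own choosing: the claim that T(n) = 2n + 10 is big-Oh of f(n) = n, witnessed by the constants c = 3 and n0 = 10. (A numeric check over a finite range is not a proof; a real proof must cover all n ≥ n0, which here follows from 2n + 10 ≤ 3n if and only if n ≥ 10.)

```python
# Sketch: checking T(n) <= c * f(n) for all n >= n0 over a finite range.
# Example claim (my own, not from the lecture): T(n) = 2n + 10 is O(n).

def T(n):
    return 2 * n + 10

def f(n):
    return n

c, n0 = 3, 10  # the two constants that witness the big-Oh claim

# The defining inequality: T(n) <= c * f(n) for all n >= n0.
assert all(T(n) <= c * f(n) for n in range(n0, 10_000))

# Note the inequality can fail below the threshold n0, and that's fine:
assert T(9) > c * f(9)  # 28 > 27
```

The point of the finite check is only to build intuition about the roles of c and n0; the algebra does the actual proving.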
So, the role of these two constants is to quantify what we mean by a constant multiple, and what we mean by sufficiently large, in the English definition.
c obviously quantifies the constant multiple of f(n), and n0 quantifies "sufficiently large": that's the threshold beyond which we insist that c times f(n) is an upper bound on T(n).
So, going back to the picture, what are c and n0? Well, c, of course, is just going to be two.
And n0 is the crossing point.
So we go to the point where 2·f(n) and T(n) cross, and then drop a line straight down.
That would be the relevant value of n0 in this picture. So that's the formal definition: the way to prove that something is big-Oh of f(n) is to exhibit these two constants, c and n0, and it had better be the case that for all n at least n0, c times f(n) upper-bounds T(n).
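The crossing point in the picture can also be found numerically. Below is a sketch with hypothetical curves of my own (the lecture's exact blue and green functions aren't specified): f(n) lies below T(n) at first, but 2·f(n) eventually crosses T(n) and stays above it, and the first such n is our n0.

```python
# Sketch: locating n0 as the first point where c * f(n) overtakes T(n).
# Hypothetical curves, not the instructor's exact ones.

def T(n):
    return n + 100   # "blue" function

def f(n):
    return n         # "green" function, below T(n) for small n

c = 2  # the constant multiple from the picture

# First n with c * f(n) >= T(n); here 2n >= n + 100 iff n >= 100.
n0 = next(n for n in range(1, 10_000) if c * f(n) >= T(n))
print(n0)  # 100

# Past the crossing point, 2 * f(n) is forevermore an upper bound on T(n):
assert all(c * f(n) >= T(n) for n in range(n0, 10_000))
```

This mirrors the graphical argument: c is read off from the multiple we applied to f, and n0 is read off from where the scaled curve crosses T.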
One way to think about it, if you're trying to establish that something is big-Oh of some function, is that it's like you're playing a game against an opponent, and you want to prove that this inequality holds, while your opponent wants to show that it doesn't hold for sufficiently large n.
You have to go first: your job is to pick a strategy in the form of a constant c and a constant n0, and your opponent is then allowed to pick any number n larger than n0. So the function is big-Oh of f(n) if and only if you have a winning strategy in this game.
That is, if you can commit up front to constants c and n0 so that no matter how big an n your opponent picks, this inequality holds. If you have no winning strategy, then it's not big-Oh of f(n): no matter what c and n0 you choose, your opponent can always flip the inequality.
By choosing a suitably large value of n.
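The game-theoretic view can be sketched in code. Here is an illustration of my own for a false claim, "n² is big-Oh of n": whatever constants (c, n0) you commit to, the opponent can always win by picking n = max(n0, c) + 1, because then n > c and so n² = n·n > c·n.

```python
# Sketch: the opponent's winning strategy against the false claim "n^2 is O(n)".
# Given any committed constants (c, n0), the opponent flips the inequality.

def opponent_wins(c, n0):
    n = max(n0, c) + 1       # opponent's move: a legal n > n0, chosen > c
    return n * n > c * n     # T(n) <= c * f(n) fails, i.e. n^2 > c * n

# No matter which constants we try to commit to, the opponent wins:
for c, n0 in [(1, 1), (1000, 1), (5, 10**6), (10**9, 10**9)]:
    assert opponent_wins(c, n0)
```

Since no choice of (c, n0) is a winning strategy, n² is not big-Oh of n, which matches the if-and-only-if characterization above.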
I want to emphasize one last thing, which concerns these constants: what do I mean by constants?
I mean they are independent of n.
And so when you apply this definition, and you choose your constant c and n0, it better be that n does not appear anywhere.
So c should just be something like a thousand or a million.
Some constant independent of n.
So those are a bunch of ways to think about big-Oh notation.
In English, you want T(n) to be bounded above by a constant multiple of f(n) for all sufficiently large n.
I've shown you how to translate that into mathematics, given you a pictorial representation,
and also a sort of game-theoretic way to think about it.
Now, let's move on to a video that explores a number of examples.