Paul Graham: The Dream Programming Language
This essay was published in 2001, more than ten years ago. But good writing does not lose its value with time.
The author, Paul Graham, is a Silicon Valley heavyweight with original views on many subjects, including programming, management, and startups.
The essay describes his idea of the ideal programming language; it is offered here for your reference.
A reminder: the word "hacker" in this essay is used in its broad sense. Do not assume it refers to someone who breaks into computer systems.
A friend of mine once told an eminent operating-systems expert that he wanted to design a really good programming language. The expert replied that it would be a waste of time: a good language will not necessarily be accepted by the market and may well go unused, because a language's popularity does not depend on its merits. At least, that was what had happened to the language the expert himself had designed.
So what does a language's popularity depend on? Do popular languages deserve their popularity? Is it still worth trying to design a better language? And if it is, how would you go about it?
I think the answers can be found by looking at hackers and learning what languages they use. Programming languages exist to serve hackers' needs in the first place, and a language succeeds as a programming language, rather than as an exercise in denotational semantics or compiler design, if and only if hackers like it.
The Mechanics of Popularity
It's true that most people don't choose a programming language for its distinctive merits; they choose it because they hear other people use it. Still, I think such external factors have less influence on a language's popularity than is commonly imagined. The bigger problem, it seems to me, is that a hacker's idea of a good programming language is not the same as most language designers'.
Between the two, the hacker's opinion is the one that matters. Programming languages are not mathematical theorems; they are tools, designed to be used. So they have to be designed for human strengths and weaknesses, just as shoes have to be designed for human feet. If a shoe pinches when you put it on, it's a bad shoe, however elegant it may look as a piece of sculpture.
It may be that most programmers can't tell a good language from a bad one. But that does not mean good languages get buried. Expert hackers can recognize one at a glance, and they will use it. They are a tiny minority, but it is this small group that writes all of humanity's good software, and their influence is such that whatever language they use, the rest of the programmers tend to follow. Frankly, that influence is often closer to command: to the other programmers, the expert hackers are their bosses or their mentors, and when they say a language is good, everyone else dutifully adopts it.
The opinion of expert hackers is not the only force that determines a language's popularity. Legacy software (as with Fortran and Cobol) and massive advertising hype (as with Ada and Java) also play a role. But I think that over the long term the opinion of expert hackers is the most important force. Given an initial critical mass of users and enough time, a language will probably end up about as popular as it deserves to be. And popularity in turn makes a good language even better, further widening the gap between it and mediocre languages, because feedback from users always leads to improvements. Think of how much every popular language has changed since it was born. Perl and Fortran are extreme cases, but even Lisp has changed a great deal.
So even leaving aside whether a language has to be good to become popular, I think popularity itself will certainly make a language better, and only popularity will keep it good. The state of the art in programming languages never stands still. The core features of a language are like the depths of the ocean, where little changes, but things like libraries and development environments are like the surface, in constant churn.
Of course, hackers have to know about a language before they can use it. How do they hear about it? From other hackers. So one way or another there has to be an initial group of hackers using the language before anyone else can hear of it. I don't know what the minimum size of that group is: how many hackers make a critical mass? If I had to guess, I'd say twenty. If a language has twenty independent users, meaning twenty people who decided on their own to use it, I'd take that as evidence that the language really has something going for it.
Getting there is not easy. I would not be surprised if going from zero users to twenty is harder than going from twenty to a thousand. The best way to win those first twenty users is probably a trojan horse: give people an application they want, which happens to be written in the new language.
External Factors
Let's start by acknowledging that there is one external factor that really does affect a language's popularity. To become popular, a language has to be the scripting language of a popular computer system. Fortran and Cobol were the scripting languages of early IBM mainframes. C was the scripting language of Unix, and so, later, were Perl and Python. Tcl is the scripting language of Tk, Visual Basic of Windows, Lisp (in one form) of Emacs, PHP of web servers, and Java and JavaScript of browsers.
Programming languages don't exist in a vacuum. "To hack" is really a transitive verb: hackers are usually hacking something, and in practice a language is always bound up with the system it is attached to. So if you want to design a popular language, you can't just design the language itself; you also have to find it a host system, and that system has to be popular too, unless your plan is simply to replace the scripting language that system already has.
One consequence of this is that a language cannot be judged on its own merits alone. Another is that a language only really becomes a programming language when it is the scripting language of some system. If this surprises you and strikes you as unfair, don't be shocked. It is like the common view that a language with only a grammar and no good implementation does not count as a complete programming language. These things are normal and reasonable; they are simply part of what a programming language is.
A programming language does, of course, need a good implementation, and that implementation must be free. Companies will pay for software, but hackers, as individuals, won't, and hackers are exactly the people you need to attract if you want a language to succeed.
A language also needs a book about it. The book should be thin, well written, and full of good examples. The C Programming Language by Brian Kernighan and Dennis Ritchie is the model here. At the moment I'd almost add that one of these books has to be published by O'Reilly. That is becoming a precondition for mattering to hackers.
A language should have online documentation as well. In fact, the documentation can start out as a book, but for now it cannot replace the physical book. Physical books are not obsolete: they are convenient to read, and the vetting publishers impose on a book's content is a useful, if imperfect, quality filter. Bookstores are among the most important places where programmers discover and learn about new languages.
Brevity
Suppose your language already satisfies the three conditions above: a free implementation, a book about it, and a system for the language to be attached to. What else does it take to make hackers like it?
One thing hackers appreciate is brevity. Hackers are lazy, in the same way that mathematicians and modernist architects are lazy: they hate anything redundant. There is a joke that before a hacker sits down to write a program, he at least works out in his head which language will require the least typing, and then uses that one. The joke is not far from the truth. And even if it really is just a joke, a language designer should take it seriously and design accordingly.
The most important route to brevity is to make the language more abstract. To do that, you first have to design a high-level language, and then make it as abstract as you can. A language designer should always be looking at code and asking whether it could be expressed in fewer tokens. If you find a way to make many different programs shorter, you have probably discovered a useful new abstraction.
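To make the idea concrete, here is a minimal sketch in Python (used only as a familiar example language, not something the essay prescribes): the same computation gets shorter each time a more general abstraction absorbs the bookkeeping.
# Squared-error sum, written three ways.

# 1. Fully spelled out: every step is explicit.
def sum_sq_error_v1(xs, ys):
    total = 0
    for i in range(len(xs)):
        diff = xs[i] - ys[i]
        total = total + diff * diff
    return total

# 2. zip() abstracts away the index bookkeeping.
def sum_sq_error_v2(xs, ys):
    total = 0
    for x, y in zip(xs, ys):
        total += (x - y) ** 2
    return total

# 3. sum() plus a generator expression abstracts away the accumulator.
def sum_sq_error_v3(xs, ys):
    return sum((x - y) ** 2 for x, y in zip(xs, ys))

assert sum_sq_error_v1([1, 2], [3, 5]) == sum_sq_error_v2([1, 2], [3, 5]) == sum_sq_error_v3([1, 2], [3, 5]) == 13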
Don't imagine that caring for your users means making them write long-winded, English-like syntax. That is the wrong approach; it is the flaw that made Cobol notorious.
If you make hackers write a sum like this:
add x to y giving z
instead of:
z=x+y
then you are doing something between insulting the hacker's intelligence and committing a sin.
Brevity is one place where statically typed languages fall short. Other things being equal, no one wants to begin a program with a pile of declarations. Anything the computer can infer on its own, it should be left to infer. For example, "hello world" ought to be a trivial program, yet in Java it takes a whole block of boilerplate, which by itself is close to proof that something is wrong with Java's design.
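The contrast looks roughly like this; the Java version is quoted in a comment purely for comparison, and the runnable line below it stands in for any language that infers what it can.
# The canonical Java version, quoted for comparison (not executed):
#
#   public class HelloWorld {
#       public static void main(String[] args) {
#           System.out.println("hello, world");
#       }
#   }
#
# The same program in a language that declares nothing it can infer:
print("hello, world")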
Individual tokens should be short as well. Perl and Common Lisp sit at opposite poles here. Perl's tokens are so short that its code can be almost too dense to read, while the names of Common Lisp's built-in operators are comically long. The designers of Common Lisp probably assumed that text editors would type the long names for the user. But the cost of a long name is not just the typing; there is also the cost of reading it, and the cost of the screen space it takes up.
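The same trade-off shows up even within one language. In the Python sketch below, each successively shorter spelling of the same sum is easier to type, easier to read, and takes less screen space, which is exactly the cost being charged to Common Lisp's long operator names.
import functools
import operator

xs = [1, 2, 3, 4]

total1 = functools.reduce(operator.add, xs, 0)  # long, fully spelled-out names
total2 = sum(xs)                                # one short built-in word
total3 = xs[0] + xs[1] + xs[2] + xs[3]          # the bare operator itself
assert total1 == total2 == total3 == 10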
Hackability
For a hacker choosing a language, there is something even more important than brevity: the language must let him do what he wants. In the history of programming languages, a surprising amount of effort has gone into preventing programmers from doing things deemed "improper." That is a dangerously presumptuous move on the designers' part: how do they know what the programmer should and shouldn't do? I think language designers should assume their target user is a genius who will need to do things they never anticipated, rather than a bumbler who needs to be protected from himself. If the user really is a bumbler, no amount of protection will keep him from shooting himself in the foot. You may stop him from referring to variables in another module, but you can't stop him from tirelessly writing badly structured programs, day and night, to solve entirely the wrong problem.
Good programmers often want to do things that are dangerous and annoying. By "annoying" I mean that they will break through the external semantic facade the designer presents to users and try to get hold of the internal interfaces of some high-level abstraction. Hackers like to hack, and hacking means getting inside things and second-guessing the original designer's intentions.
Welcome that second-guessing. If you make tools, there will always be users who use them in ways you did not intend, and that is all the more true when what you are making is something as highly composable as a programming language. Many hackers will bend your semantic model in ways you never dreamed of. My advice is: let them, and make it easy for them; expose as much of the language's internals to them as you can.
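A small Python sketch of what "exposing the internals" buys in practice: because the language lets a program walk any object's fields generically, a task like finding uninitialized fields needs no per-class code. (The Point class and the None convention here are only illustrative assumptions.)
class Point:
    def __init__(self, x, y, label=None):
        self.x = x
        self.y = y
        self.label = label

def uninitialized_fields(obj):
    # Generic helper: walk any object's fields and report the ones left unset.
    return [name for name, value in vars(obj).items() if value is None]

p = Point(3, 4)
print(uninitialized_fields(p))  # ['label'] -- no per-class code needed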
In practice, hackers won't turn your tool completely upside down; in a big program, a hacker may only want to bend the language in one or two places. But how much he changes is not the point; the point is that he is able to change it. That may do more than help solve some particular problem: there is a kind of pleasure in it too. The hacker's pleasure in tinkering with a language is like the surgeon's pleasure in poking around in a patient's innards, or a teenager's pleasure in popping zits. For boys, at least, certain kinds of destruction are fascinating. Maxim, a magazine aimed at young men, publishes an annual volume that is half pin-up girls and half photographs of gruesome accidents. The magazine knows exactly what its readers want.
A really good programming language should be both clean and dirty. "Clean" means it is clearly designed, with a small core of operators that are easy to understand and each of which serves a complete, independent purpose. "Dirty" means it lets hackers have their way with it. C is such a language, and so were the early Lisps. A real hacker's language always carries a slightly raffish, ungovernable streak.
A good language should have features that make the kind of people who like to invoke the phrase "software engineering" frown and shake their heads. At the other end of the spectrum are languages like Pascal, models of propriety that are good for teaching and not much else.
Throwaway Programs
To be attractive to hackers, a language has to be good at the kinds of tasks hackers want to do. That means, perhaps surprisingly, that it has to be good for writing throwaway programs.
A throwaway program is one written quickly, in a short time, for some limited, temporary task: a program to automate a system-administration chore, or to generate test data for a simulation, or to convert data from one format to another. The surprising thing about throwaway programs is that they often don't get thrown away, just as many of the "temporary" buildings American universities put up during World War II ended up permanent. Many throwaway programs grow into real programs, with real features and outside users.
I have a hunch that the best big programs begin life this way, rather than being designed as big projects from the start, like the Hoover Dam. Building something big from nothing all at once is terrifying. When people take on a giant project, it can crush them. In the end either the project bogs down, or what emerges is small and feeble: you set out to build a bustling downtown and produce a shopping mall; you aim for Rome and get Brasilia; you try to invent C and end up with Ada.
The other way to get a big program is to start with a throwaway program and keep improving it. This approach is far less daunting, and the program improves as it evolves. As a rule, whatever language such a program starts in is the language it keeps to the end, because programmers rarely switch languages midstream unless some external political force intervenes. So we arrive at a seemingly paradoxical conclusion: if you want to design a language suitable for big projects, you have to make it good for writing throwaway programs, because that is where big projects come from.
Perl is a striking example. It was not just designed for writing throwaway programs; it was pretty much a throwaway program itself. The earliest Perl was merely a handful of report-generating utilities bundled together. Programmers then used it for throwaway programs, and only as those programs grew did Perl grow into a real programming language. It was not until Perl 5 that the language became suitable for serious programs, yet it was already widely popular before that.
What makes a language good for throwaway programs? To start with, it must be readily available. A throwaway program is something you intend to finish within an hour, so it should not cost you much time to install and configure; ideally it is already on your machine. It has to be there when you want it. C was there because it came with the operating system; Perl was there because it started out as a system-administration tool and the operating system installed it by default.
Being readily available means more than being easy to install or preinstalled; it also means being easy to interact with. A language with a command-line interface and immediate feedback is interactive; a language you must compile before you can run it is not. A language people will like should be the former: interactive, with results that come back fast.
The other thing you want in a throwaway program is brevity. Brevity is always attractive to hackers, and all the more so in a program you intend to spend at most an hour on.
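For a concrete sense of the kind of program being described, here is a plausible throwaway script in Python, using only the standard library; the file name and usage line are invented for the example.
import csv
import json
import sys

# Read rows from a CSV file and dump them as a JSON array of objects.
# Usage: python csv2json.py input.csv > output.json
with open(sys.argv[1], newline="") as f:
    rows = list(csv.DictReader(f))

json.dump(rows, sys.stdout, indent=2)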
Original source:
http://www.paulgraham.com/popular.html
May 2001
(This article was written as a kind of business plan for a new language. So it is missing (because it takes for granted) the most important feature of a good programming language: very powerful abstractions.)
A friend of mine once told an eminent operating systems expert that he wanted to design a really good programming language. The expert told him that it would be a waste of time, that programming languages don't become popular or unpopular based on their merits, and so no matter how good his language was, no one would use it. At least, that was what had happened to the language he had designed.
What does make a language popular? Do popular languages deserve their popularity? Is it worth trying to define a good programming language? How would you do it?
I think the answers to these questions can be found by looking at hackers, and learning what they want. Programming languages are for hackers, and a programming language is good as a programming language (rather than, say, an exercise in denotational semantics or compiler design) if and only if hackers like it.
1 The Mechanics of Popularity
It's true, certainly, that most people don't choose programming languages simply based on their merits. Most programmers are told what language to use by someone else. And yet I think the effect of such external factors on the popularity of programming languages is not as great as it's sometimes thought to be. I think a bigger problem is that a hacker's idea of a good programming language is not the same as most language designers'.
Between the two, the hacker's opinion is the one that matters. Programming languages are not theorems. They're tools, designed for people, and they have to be designed to suit human strengths and weaknesses as much as shoes have to be designed for human feet. If a shoe pinches when you put it on, it's a bad shoe, however elegant it may be as a piece of sculpture.
It may be that the majority of programmers can't tell a good language from a bad one. But that's no different with any other tool. It doesn't mean that it's a waste of time to try designing a good language. Expert hackers can tell a good language when they see one, and they'll use it. Expert hackers are a tiny minority, admittedly, but that tiny minority write all the good software, and their influence is such that the rest of the programmers will tend to use whatever language they use. Often, indeed, it is not merely influence but command: often the expert hackers are the very people who, as their bosses or faculty advisors, tell the other programmers what language to use.
The opinion of expert hackers is not the only force that determines the relative popularity of programming languages-- legacy software (Cobol) and hype (Ada, Java) also play a role-- but I think it is the most powerful force over the long term. Given an initial critical mass and enough time, a programming language probably becomes about as popular as it deserves to be. And popularity further separates good languages from bad ones, because feedback from real live users always leads to improvements. Look at how much any popular language has changed during its life. Perl and Fortran are extreme cases, but even Lisp has changed a lot. Lisp 1.5 didn't have macros, for example; these evolved later, after hackers at MIT had spent a couple years using Lisp to write real programs. [1]
So whether or not a language has to be good to be popular, I think a language has to be popular to be good. And it has to stay popular to stay good. The state of the art in programming languages doesn't stand still. And yet the Lisps we have today are still pretty much what they had at MIT in the mid-1980s, because that's the last time Lisp had a sufficiently large and demanding user base.
Of course, hackers have to know about a language before they can use it. How are they to hear? From other hackers. But there has to be some initial group of hackers using the language for others even to hear about it. I wonder how large this group has to be; how many users make a critical mass? Off the top of my head, I'd say twenty. If a language had twenty separate users, meaning twenty users who decided on their own to use it, I'd consider it to be real.
Getting there can't be easy. I would not be surprised if it is harder to get from zero to twenty than from twenty to a thousand. The best way to get those initial twenty users is probably to use a trojan horse: to give people an application they want, which happens to be written in the new language.
2 External Factors
Let's start by acknowledging one external factor that does affect the popularity of a programming language. To become popular, a programming language has to be the scripting language of a popular system. Fortran and Cobol were the scripting languages of early IBM mainframes. C was the scripting language of Unix, and so, later, was Perl. Tcl is the scripting language of Tk. Java and Javascript are intended to be the scripting languages of web browsers.
Lisp is not a massively popular language because it is not the scripting language of a massively popular system. What popularity it retains dates back to the 1960s and 1970s, when it was the scripting language of MIT. A lot of the great programmers of the day were associated with MIT at some point. And in the early 1970s, before C, MIT's dialect of Lisp, called MacLisp, was one of the only programming languages a serious hacker would want to use.
Today Lisp is the scripting language of two moderately popular systems, Emacs and Autocad, and for that reason I suspect that most of the Lisp programming done today is done in Emacs Lisp or AutoLisp.
Programming languages don't exist in isolation. To hack is a transitive verb-- hackers are usually hacking something-- and in practice languages are judged relative to whatever they're used to hack. So if you want to design a popular language, you either have to supply more than a language, or you have to design your language to replace the scripting language of some existing system.
Common Lisp is unpopular partly because it's an orphan. It did originally come with a system to hack: the Lisp Machine. But Lisp Machines (along with parallel computers) were steamrollered by the increasing power of general purpose processors in the 1980s. Common Lisp might have remained popular if it had been a good scripting language for Unix. It is, alas, an atrociously bad one.
One way to describe this situation is to say that a language isn't judged on its own merits. Another view is that a programming language really isn't a programming language unless it's also the scripting language of something. This only seems unfair if it comes as a surprise. I think it's no more unfair than expecting a programming language to have, say, an implementation. It's just part of what a programming language is.
A programming language does need a good implementation, of course, and this must be free. Companies will pay for software, but individual hackers won't, and it's the hackers you need to attract.
A language also needs to have a book about it. The book should be thin, well-written, and full of good examples. K&R is the ideal here. At the moment I'd almost say that a language has to have a book published by O'Reilly. That's becoming the test of mattering to hackers.
There should be online documentation as well. In fact, the book can start as online documentation. But I don't think that physical books are outmoded yet. Their format is convenient, and the de facto censorship imposed by publishers is a useful if imperfect filter. Bookstores are one of the most important places for learning about new languages.
3 Brevity
Given that you can supply the three things any language needs-- a free implementation, a book, and something to hack-- how do you make a language that hackers will like?
One thing hackers like is brevity. Hackers are lazy, in the same way that mathematicians and modernist architects are lazy: they hate anything extraneous. It would not be far from the truth to say that a hacker about to write a program decides what language to use, at least subconsciously, based on the total number of characters he'll have to type. If this isn't precisely how hackers think, a language designer would do well to act as if it were.
It is a mistake to try to baby the user with long-winded expressions that are meant to resemble English. Cobol is notorious for this flaw. A hacker would consider being asked to write
add x to y giving z
instead of
z = x+y
as something between an insult to his intelligence and a sin against God.
It has sometimes been said that Lisp should use first and rest instead of car and cdr, because it would make programs easier to read. Maybe for the first couple hours. But a hacker can learn quickly enough that car means the first element of a list and cdr means the rest. Using first and rest means 50% more typing. And they are also different lengths, meaning that the arguments won't line up when they're called, as car and cdr often are, in successive lines. I've found that it matters a lot how code lines up on the page. I can barely read Lisp code when it is set in a variable-width font, and friends say this is true for other languages too.
Brevity is one place where strongly typed languages lose. All other things being equal, no one wants to begin a program with a bunch of declarations. Anything that can be implicit, should be.
The individual tokens should be short as well. Perl and Common Lisp occupy opposite poles on this question. Perl programs can be almost cryptically dense, while the names of built-in Common Lisp operators are comically long. The designers of Common Lisp probably expected users to have text editors that would type these long names for them. But the cost of a long name is not just the cost of typing it. There is also the cost of reading it, and the cost of the space it takes up on your screen.
4 Hackability
There is one thing more important than brevity to a hacker: being able to do what you want. In the history of programming languages a surprising amount of effort has gone into preventing programmers from doing things considered to be improper. This is a dangerously presumptuous plan. How can the language designer know what the programmer is going to need to do? I think language designers would do better to consider their target user to be a genius who will need to do things they never anticipated, rather than a bumbler who needs to be protected from himself. The bumbler will shoot himself in the foot anyway. You may save him from referring to variables in another package, but you can't save him from writing a badly designed program to solve the wrong problem, and taking forever to do it.
Good programmers often want to do dangerous and unsavory things. By unsavory I mean things that go behind whatever semantic facade the language is trying to present: getting hold of the internal representation of some high-level abstraction, for example. Hackers like to hack, and hacking means getting inside things and second guessing the original designer.
Let yourself be second guessed. When you make any tool, people use it in ways you didn't intend, and this is especially true of a highly articulated tool like a programming language. Many a hacker will want to tweak your semantic model in a way that you never imagined. I say, let them; give the programmer access to as much internal stuff as you can without endangering runtime systems like the garbage collector.
In Common Lisp I have often wanted to iterate through the fields of a struct-- to comb out references to a deleted object, for example, or find fields that are uninitialized. I know the structs are just vectors underneath. And yet I can't write a general purpose function that I can call on any struct. I can only access the fields by name, because that's what a struct is supposed to mean.
A hacker may only want to subvert the intended model of things once or twice in a big program. But what a difference it makes to be able to. And it may be more than a question of just solving a problem. There is a kind of pleasure here too. Hackers share the surgeon's secret pleasure in poking about in gross innards, the teenager's secret pleasure in popping zits. [2] For boys, at least, certain kinds of horrors are fascinating. Maxim magazine publishes an annual volume of photographs, containing a mix of pin-ups and grisly accidents. They know their audience.
Historically, Lisp has been good at letting hackers have their way. The political correctness of Common Lisp is an aberration. Early Lisps let you get your hands on everything. A good deal of that spirit is, fortunately, preserved in macros. What a wonderful thing, to be able to make arbitrary transformations on the source code.
Classic macros are a real hacker's tool-- simple, powerful, and dangerous. It's so easy to understand what they do: you call a function on the macro's arguments, and whatever it returns gets inserted in place of the macro call. Hygienic macros embody the opposite principle. They try to protect you from understanding what they're doing. I have never heard hygienic macros explained in one sentence. And they are a classic example of the dangers of deciding what programmers are allowed to want. Hygienic macros are intended to protect me from variable capture, among other things, but variable capture is exactly what I want in some macros.
A really good language should be both clean and dirty: cleanly designed, with a small core of well understood and highly orthogonal operators, but dirty in the sense that it lets hackers have their way with it. C is like this. So were the early Lisps. A real hacker's language will always have a slightly raffish character.
A good programming language should have features that make the kind of people who use the phrase "software engineering" shake their heads disapprovingly. At the other end of the continuum are languages like Ada and Pascal, models of propriety that are good for teaching and not much else.
5 Throwaway Programs
To be attractive to hackers, a language must be good for writing the kinds of programs they want to write. And that means, perhaps surprisingly, that it has to be good for writing throwaway programs.
A throwaway program is a program you write quickly for some limited task: a program to automate some system administration task, or generate test data for a simulation, or convert data from one format to another. The surprising thing about throwaway programs is that, like the "temporary" buildings built at so many American universities during World War II, they often don't get thrown away. Many evolve into real programs, with real features and real users.
I have a hunch that the best big programs begin life this way, rather than being designed big from the start, like the Hoover Dam. It's terrifying to build something big from scratch. When people take on a project that's too big, they become overwhelmed. The project either gets bogged down, or the result is sterile and wooden: a shopping mall rather than a real downtown, Brasilia rather than Rome, Ada rather than C.
Another way to get a big program is to start with a throwaway program and keep improving it. This approach is less daunting, and the design of the program benefits from evolution. I think, if one looked, that this would turn out to be the way most big programs were developed. And those that did evolve this way are probably still written in whatever language they were first written in, because it's rare for a program to be ported, except for political reasons. And so, paradoxically, if you want to make a language that is used for big systems, you have to make it good for writing throwaway programs, because that's where big systems come from.
Perl is a striking example of this idea. It was not only designed for writing throwaway programs, but was pretty much a throwaway program itself. Perl began life as a collection of utilities for generating reports, and only evolved into a programming language as the throwaway programs people wrote in it grew larger. It was not until Perl 5 (if then) that the language was suitable for writing serious programs, and yet it was already massively popular.
What makes a language good for throwaway programs? To start with, it must be readily available. A throwaway program is something that you expect to write in an hour. So the language probably must already be installed on the computer you're using. It can't be something you have to install before you use it. It has to be there. C was there because it came with the operating system. Perl was there because it was originally a tool for system administrators, and yours had already installed it.
Being available means more than being installed, though. An interactive language, with a command-line interface, is more available than one that you have to compile and run separately. A popular programming language should be interactive, and start up fast.
Another thing you want in a throwaway program is brevity. Brevity is always attractive to hackers, and never more so than in a program they expect to turn out in an hour.
6 Libraries
Of course the ultimate in brevity is to have the program already written for you, and merely to call it. And this brings us to what I think will be an increasingly important feature of programming languages: library functions. Perl wins because it has large libraries for manipulating strings. This class of library functions are especially important for throwaway programs, which are often originally written for converting or extracting data. Many Perl programs probably begin as just a couple library calls stuck together.
I think a lot of the advances that happen in programming languages in the next fifty years will have to do with library functions. I think future programming languages will have libraries that are as carefully designed as the core language. Programming language design will not be about whether to make your language strongly or weakly typed, or object oriented, or functional, or whatever, but about how to design great libraries. The kind of language designers who like to think about how to design type systems may shudder at this. It's almost like writing applications! Too bad. Languages are for programmers, and libraries are what programmers need.
It's hard to design good libraries. It's not simply a matter of writing a lot of code. Once the libraries get too big, it can sometimes take longer to find the function you need than to write the code yourself. Libraries need to be designed using a small set of orthogonal operators, just like the core language. It ought to be possible for the programmer to guess what library call will do what he needs.
Libraries are one place Common Lisp falls short. There are only rudimentary libraries for manipulating strings, and almost none for talking to the operating system. For historical reasons, Common Lisp tries to pretend that the OS doesn't exist. And because you can't talk to the OS, you're unlikely to be able to write a serious program using only the built-in operators in Common Lisp. You have to use some implementation-specific hacks as well, and in practice these tend not to give you everything you want. Hackers would think a lot more highly of Lisp if Common Lisp had powerful string libraries and good OS support.
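The "couple of library calls stuck together" description is easy to picture; a minimal Python sketch, with the log-file name invented for the example:
import re
from collections import Counter

# A few library calls stuck together: the ten most common words in a file.
with open("access.log") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

for word, count in Counter(words).most_common(10):
    print(f"{count:6d}  {word}")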
7 Syntax
Could a language with Lisp's syntax, or more precisely, lack of syntax, ever become popular? I don't know the answer to this question. I do think that syntax is not the main reason Lisp isn't currently popular. Common Lisp has worse problems than unfamiliar syntax. I know several programmers who are comfortable with prefix syntax and yet use Perl by default, because it has powerful string libraries and can talk to the os.
There are two possible problems with prefix notation: that it is unfamiliar to programmers, and that it is not dense enough. The conventional wisdom in the Lisp world is that the first problem is the real one. I'm not so sure. Yes, prefix notation makes ordinary programmers panic. But I don't think ordinary programmers' opinions matter. Languages become popular or unpopular based on what expert hackers think of them, and I think expert hackers might be able to deal with prefix notation. Perl syntax can be pretty incomprehensible, but that has not stood in the way of Perl's popularity. If anything it may have helped foster a Perl cult.
A more serious problem is the diffuseness of prefix notation. For expert hackers, that really is a problem. No one wants to write
(aref a x y)
when they could write
a[x,y]
In this particular case there is a way to finesse our way out of the problem. If we treat data structures as if they were functions on indexes, we could write (a x y) instead, which is even shorter than the Perl form. Similar tricks may shorten other types of expressions.
We can get rid of (or make optional) a lot of parentheses by making indentation significant. That's how programmers read code anyway: when indentation says one thing and delimiters say another, we go by the indentation. Treating indentation as significant would eliminate this common source of bugs as well as making programs shorter.
Sometimes infix syntax is easier to read. This is especially true for math expressions. I've used Lisp my whole programming life and I still don't find prefix math expressions natural. And yet it is convenient, especially when you're generating code, to have operators that take any number of arguments. So if we do have infix syntax, it should probably be implemented as some kind of read-macro.
I don't think we should be religiously opposed to introducing syntax into Lisp, as long as it translates in a well-understood way into underlying s-expressions. There is already a good deal of syntax in Lisp. It's not necessarily bad to introduce more, as long as no one is forced to use it. In Common Lisp, some delimiters are reserved for the language, suggesting that at least some of the designers intended to have more syntax in the future.
One of the most egregiously unlispy pieces of syntax in Common Lisp occurs in format strings; format is a language in its own right, and that language is not Lisp. If there were a plan for introducing more syntax into Lisp, format specifiers might be able to be included in it. It would be a good thing if macros could generate format specifiers the way they generate any other kind of code.
An eminent Lisp hacker told me that his copy of CLTL falls open to the section format. Mine too. This probably indicates room for improvement. It may also mean that programs do a lot of I/O.
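The "data structures as functions on indexes" idea can be sketched outside Lisp as well; a toy Python illustration, with the Grid class invented for the example:
class Grid:
    # A 2-D array that can be indexed by calling it, as if it were a function of (x, y).
    def __init__(self, rows):
        self.rows = rows

    def __call__(self, x, y):
        return self.rows[x][y]

a = Grid([[1, 2, 3],
          [4, 5, 6]])
print(a(1, 2))   # 6 -- compare a.rows[1][2], or (aref a 1 2) in Common Lisp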
8 Efficiency
A good language, as everyone knows, should generate fast code. But in practice I don't think fast code comes primarily from things you do in the design of the language. As Knuth pointed out long ago, speed only matters in certain critical bottlenecks. And as many programmers have observed since, one is very often mistaken about where these bottlenecks are.
So, in practice, the way to get fast code is to have a very good profiler, rather than by, say, making the language strongly typed. You don't need to know the type of every argument in every call in the program. You do need to be able to declare the types of arguments in the bottlenecks. And even more, you need to be able to find out where the bottlenecks are.
One complaint people have had with Lisp is that it's hard to tell what's expensive. This might be true. It might also be inevitable, if you want to have a very abstract language. And in any case I think good profiling would go a long way toward fixing the problem: you'd soon learn what was expensive.
Part of the problem here is social. Language designers like to write fast compilers. That's how they measure their skill. They think of the profiler as an add-on, at best. But in practice a good profiler may do more to improve the speed of actual programs written in the language than a compiler that generates fast code. Here, again, language designers are somewhat out of touch with their users. They do a really good job of solving slightly the wrong problem.
It might be a good idea to have an active profiler-- to push performance data to the programmer instead of waiting for him to come asking for it. For example, the editor could display bottlenecks in red when the programmer edits the source code. Another approach would be to somehow represent what's happening in running programs. This would be an especially big win in server-based applications, where you have lots of running programs to look at. An active profiler could show graphically what's happening in memory as a program's running, or even make sounds that tell what's happening.
Sound is a good cue to problems. In one place I worked, we had a big board of dials showing what was happening to our web servers. The hands were moved by little servomotors that made a slight noise when they turned. I couldn't see the board from my desk, but I found that I could tell immediately, by the sound, when there was a problem with a server.
It might even be possible to write a profiler that would automatically detect inefficient algorithms. I would not be surprised if certain patterns of memory access turned out to be sure signs of bad algorithms. If there were a little guy running around inside the computer executing our programs, he would probably have as long and plaintive a tale to tell about his job as a federal government employee. I often have a feeling that I'm sending the processor on a lot of wild goose chases, but I've never had a good way to look at what it's doing.
A number of Lisps now compile into byte code, which is then executed by an interpreter. This is usually done to make the implementation easier to port, but it could be a useful language feature. It might be a good idea to make the byte code an official part of the language, and to allow programmers to use inline byte code in bottlenecks. Then such optimizations would be portable too.
The nature of speed, as perceived by the end-user, may be changing. With the rise of server-based applications, more and more programs may turn out to be i/o-bound. It will be worth making i/o fast. The language can help with straightforward measures like simple, fast, formatted output functions, and also with deep structural changes like caching and persistent objects.
Users are interested in response time. But another kind of efficiency will be increasingly important: the number of simultaneous users you can support per processor. Many of the interesting applications written in the near future will be server-based, and the number of users per server is the critical question for anyone hosting such applications. In the capital cost of a business offering a server-based application, this is the divisor.
For years, efficiency hasn't mattered much in most end-user applications. Developers have been able to assume that each user would have an increasingly powerful processor sitting on their desk. And by Parkinson's Law, software has expanded to use the resources available. That will change with server-based applications. In that world, the hardware and software will be supplied together. For companies that offer server-based applications, it will make a very big difference to the bottom line how many users they can support per server.
In some applications, the processor will be the limiting factor, and execution speed will be the most important thing to optimize. But often memory will be the limit; the number of simultaneous users will be determined by the amount of memory you need for each user's data. The language can help here too. Good support for threads will enable all the users to share a single heap. It may also help to have persistent objects and/or language level support for lazy loading.
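The claim that a good profiler matters more than a clever compiler is easy to try in any interpreted language; a minimal sketch with Python's built-in cProfile module, using a deliberately naive function invented for the example:
import cProfile
import pstats

def slow_square_sum(n):
    # Deliberately naive: rebuilding the list on every iteration creates a visible bottleneck.
    acc = []
    for i in range(n):
        acc = acc + [i * i]     # O(n) copy each time through the loop
    return sum(acc)

# Profile the call and print the few functions where the time actually goes.
profiler = cProfile.Profile()
profiler.runcall(slow_square_sum, 5000)
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)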
9 Time
The last ingredient a popular language needs is time. No one wants to write programs in a language that might go away, as so many programming languages do. So most hackers will tend to wait until a language has been around for a couple years before even considering using it.
Inventors of wonderful new things are often surprised to discover this, but you need time to get any message through to people. A friend of mine rarely does anything the first time someone asks him. He knows that people sometimes ask for things that they turn out not to want. To avoid wasting his time, he waits till the third or fourth time he's asked to do something; by then, whoever's asking him may be fairly annoyed, but at least they probably really do want whatever they're asking for.
Most people have learned to do a similar sort of filtering on new things they hear about. They don't even start paying attention until they've heard about something ten times. They're perfectly justified: the majority of hot new whatevers do turn out to be a waste of time, and eventually go away. By delaying learning VRML, I avoided having to learn it at all.
So anyone who invents something new has to expect to keep repeating their message for years before people will start to get it. We wrote what was, as far as I know, the first web-server based application, and it took us years to get it through to people that it didn't have to be downloaded. It wasn't that they were stupid. They just had us tuned out.
The good news is, simple repetition solves the problem. All you have to do is keep telling your story, and eventually people will start to hear. It's not when people notice you're there that they pay attention; it's when they notice you're still there.
It's just as well that it usually takes a while to gain momentum. Most technologies evolve a good deal even after they're first launched-- programming languages especially. Nothing could be better, for a new technology, than a few years of being used only by a small number of early adopters. Early adopters are sophisticated and demanding, and quickly flush out whatever flaws remain in your technology. When you only have a few users you can be in close contact with all of them. And early adopters are forgiving when you improve your system, even if this causes some breakage.
There are two ways new technology gets introduced: the organic growth method, and the big bang method. The organic growth method is exemplified by the classic seat-of-the-pants underfunded garage startup. A couple guys, working in obscurity, develop some new technology. They launch it with no marketing and initially have only a few (fanatically devoted) users. They continue to improve the technology, and meanwhile their user base grows by word of mouth. Before they know it, they're big.
The other approach, the big bang method, is exemplified by the VC-backed, heavily marketed startup. They rush to develop a product, launch it with great publicity, and immediately (they hope) have a large user base.
Generally, the garage guys envy the big bang guys. The big bang guys are smooth and confident and respected by the VCs. They can afford the best of everything, and the PR campaign surrounding the launch has the side effect of making them celebrities. The organic growth guys, sitting in their garage, feel poor and unloved. And yet I think they are often mistaken to feel sorry for themselves. Organic growth seems to yield better technology and richer founders than the big bang method. If you look at the dominant technologies today, you'll find that most of them grew organically.
This pattern doesn't only apply to companies. You see it in sponsored research too. Multics and Common Lisp were big-bang projects, and Unix and MacLisp were organic growth projects.
10 Redesign
"The best writing is rewriting," wrote E. B. White. Every good writer knows this, and it's true for software too. The most important part of design is redesign. Programming languages, especially, don't get redesigned enough.
To write good software you must simultaneously keep two opposing ideas in your head. You need the young hacker's naive faith in his abilities, and at the same time the veteran's skepticism. You have to be able to think how hard can it be? with one half of your brain while thinking it will never work with the other.
The trick is to realize that there's no real contradiction here. You want to be optimistic and skeptical about two different things. You have to be optimistic about the possibility of solving the problem, but skeptical about the value of whatever solution you've got so far.
People who do good work often think that whatever they're working on is no good. Others see what they've done and are full of wonder, but the creator is full of worry. This pattern is no coincidence: it is the worry that made the work good.
If you can keep hope and worry balanced, they will drive a project forward the same way your two legs drive a bicycle forward. In the first phase of the two-cycle innovation engine, you work furiously on some problem, inspired by your confidence that you'll be able to solve it. In the second phase, you look at what you've done in the cold light of morning, and see all its flaws very clearly. But as long as your critical spirit doesn't outweigh your hope, you'll be able to look at your admittedly incomplete system, and think, how hard can it be to get the rest of the way?, thereby continuing the cycle.
It's tricky to keep the two forces balanced. In young hackers, optimism predominates. They produce something, are convinced it's great, and never improve it. In old hackers, skepticism predominates, and they won't even dare to take on ambitious projects.
Anything you can do to keep the redesign cycle going is good. Prose can be rewritten over and over until you're happy with it. But software, as a rule, doesn't get redesigned enough. Prose has readers, but software has users. If a writer rewrites an essay, people who read the old version are unlikely to complain that their thoughts have been broken by some newly introduced incompatibility.
Users are a double-edged sword. They can help you improve your language, but they can also deter you from improving it. So choose your users carefully, and be slow to grow their number. Having users is like optimization: the wise course is to delay it. Also, as a general rule, you can at any given time get away with changing more than you think. Introducing change is like pulling off a bandage: the pain is a memory almost as soon as you feel it.
Everyone knows that it's not a good idea to have a language designed by a committee. Committees yield bad design. But I think the worst danger of committees is that they interfere with redesign. It is so much work to introduce changes that no one wants to bother. Whatever a committee decides tends to stay that way, even if most of the members don't like it.
Even a committee of two gets in the way of redesign. This happens particularly in the interfaces between pieces of software written by two different people. To change the interface both have to agree to change it at once. And so interfaces tend not to change at all, which is a problem because they tend to be one of the most ad hoc parts of any system.
One solution here might be to design systems so that interfaces are horizontal instead of vertical-- so that modules are always vertically stacked strata of abstraction. Then the interface will tend to be owned by one of them. The lower of two levels will either be a language in which the upper is written, in which case the lower level will own the interface, or it will be a slave, in which case the interface can be dictated by the upper level.
11 Lisp
What all this implies is that there is hope for a new Lisp. There is hope for any language that gives hackers what they want, including Lisp. I think we may have made a mistake in thinking that hackers are turned off by Lisp's strangeness. This comforting illusion may have prevented us from seeing the real problem with Lisp, or at least Common Lisp, which is that it sucks for doing what hackers want to do. A hacker's language needs powerful libraries and something to hack. Common Lisp has neither. A hacker's language is terse and hackable. Common Lisp is not.
The good news is, it's not Lisp that sucks, but Common Lisp. If we can develop a new Lisp that is a real hacker's language, I think hackers will use it. They will use whatever language does the job. All we have to do is make sure this new Lisp does some important job better than other languages.
History offers some encouragement. Over time, successive new programming languages have taken more and more features from Lisp. There is no longer much left to copy before the language you've made is Lisp. The latest hot language, Python, is a watered-down Lisp with infix syntax and no macros. A new Lisp would be a natural step in this progression.
I sometimes think that it would be a good marketing trick to call it an improved version of Python. That sounds hipper than Lisp. To many people, Lisp is a slow AI language with a lot of parentheses. Fritz Kunze's official biography carefully avoids mentioning the L-word. But my guess is that we shouldn't be afraid to call the new Lisp Lisp. Lisp still has a lot of latent respect among the very best hackers-- the ones who took 6.001 and understood it, for example. And those are the users you need to win.
In "How to Become a Hacker," Eric Raymond describes Lisp as something like Latin or Greek-- a language you should learn as an intellectual exercise, even though you won't actually use it:
Lisp is worth learning for the profound enlightenment experience you will have when you finally get it; that experience will make you a better programmer for the rest of your days, even if you never actually use Lisp itself a lot.
If I didn't know Lisp, reading this would set me asking questions. A language that would make me a better programmer, if it means anything at all, means a language that would be better for programming. And that is in fact the implication of what Eric is saying.
As long as that idea is still floating around, I think hackers will be receptive enough to a new Lisp, even if it is called Lisp. But this Lisp must be a hacker's language, like the classic Lisps of the 1970s. It must be terse, simple, and hackable. And it must have powerful libraries for doing what hackers want to do now.
In the matter of libraries I think there is room to beat languages like Perl and Python at their own game. A lot of the new applications that will need to be written in the coming years will be server-based applications. There's no reason a new Lisp shouldn't have string libraries as good as Perl, and if this new Lisp also had powerful libraries for server-based applications, it could be very popular. Real hackers won't turn up their noses at a new tool that will let them solve hard problems with a few library calls. Remember, hackers are lazy.
It could be an even bigger win to have core language support for server-based applications. For example, explicit support for programs with multiple users, or data ownership at the level of type tags.
Server-based applications also give us the answer to the question of what this new Lisp will be used to hack. It would not hurt to make Lisp better as a scripting language for Unix. (It would be hard to make it worse.) But I think there are areas where existing languages would be easier to beat. I think it might be better to follow the model of Tcl, and supply the Lisp together with a complete system for supporting server-based applications. Lisp is a natural fit for server-based applications. Lexical closures provide a way to get the effect of subroutines when the ui is just a series of web pages. S-expressions map nicely onto html, and macros are good at generating it. There need to be better tools for writing server-based applications, and there needs to be a new Lisp, and the two would work very well together.
12 The Dream Language
By way of summary, let's try describing the hacker's dream language. The dream language is beautiful, clean, and terse. It has an interactive toplevel that starts up fast. You can write programs to solve common problems with very little code. Nearly all the code in any program you write is code that's specific to your application. Everything else has been done for you.
The syntax of the language is brief to a fault. You never have to type an unnecessary character, or even to use the shift key much.
Using big abstractions you can write the first version of a program very quickly. Later, when you want to optimize, there's a really good profiler that tells you where to focus your attention. You can make inner loops blindingly fast, even writing inline byte code if you need to.
There are lots of good examples to learn from, and the language is intuitive enough that you can learn how to use it from examples in a couple minutes. You don't need to look in the manual much. The manual is thin, and has few warnings and qualifications.
The language has a small core, and powerful, highly orthogonal libraries that are as carefully designed as the core language. The libraries all work well together; everything in the language fits together like the parts in a fine camera. Nothing is deprecated, or retained for compatibility. The source code of all the libraries is readily available. It's easy to talk to the operating system and to applications written in other languages.
The language is built in layers. The higher-level abstractions are built in a very transparent way out of lower-level abstractions, which you can get hold of if you want.
Nothing is hidden from you that doesn't absolutely have to be. The language offers abstractions only as a way of saving you work, rather than as a way of telling you what to do. In fact, the language encourages you to be an equal participant in its design. You can change everything about it, including even its syntax, and anything you write has, as much as possible, the same status as what comes predefined.
Notes
[1] Macros very close to the modern idea were proposed by Timothy Hart in 1964, two years after Lisp 1.5 was released. What was missing, initially, were ways to avoid variable capture and multiple evaluation; Hart's examples are subject to both.
[2] In When the Air Hits Your Brain, neurosurgeon Frank Vertosick recounts a conversation in which his chief resident, Gary, talks about the difference between surgeons and internists ("fleas"):
Gary and I ordered a large pizza and found an open booth. The chief lit a cigarette. "Look at those goddamn fleas, jabbering about some disease they'll see once in their lifetimes. That's the trouble with fleas, they only like the bizarre stuff. They hate their bread and butter cases. That's the difference between us and the fucking fleas. See, we love big juicy lumbar disc herniations, but they hate hypertension...."
It's hard to think of a lumbar disc herniation as juicy (except literally). And yet I think I know what they mean. I've often had a juicy bug to track down. Someone who's not a programmer would find it hard to imagine that there could be pleasure in a bug. Surely it's better if everything just works. In one way, it is. And yet there is undeniably a grim satisfaction in hunting down certain sorts of bugs.