单片机英文参考文献翻译.doc (Translation of an English Reference on Single-Chip Microcomputers)

Tianjin University of Science and Technology
Graduate Foreign Literature Translation

Part I: The English Original

Progress in Computers
Prestige Lecture delivered to the IEE, Cambridge, on 5 February 2009
Maurice Wilkes

The first stored program computers began to work around 1950. The one we built in Cambridge, the EDSAC, was first used in the summer of 1949.

These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated correctly; what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.

As far as computing circuits were concerned, we found ourselves with an embarras de richesse. For example, we could use vacuum tube diodes for gates, as we did in the EDSAC, or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted, and the term "families of logic" came into use. Those who have worked in the computer field will remember TTL, ECL and CMOS. Of these, CMOS has now become dominant.

In those early years, the IEE was still dominated by power engineering, and we had to fight a number of major battles in order to get radio engineering, along with the rapidly developing subject of electronics (dubbed in the IEE "light current electrical engineering"), properly recognised as an activity in its own right. I remember that we had some difficulty in organising a conference because the power engineers' ways of doing things were not our ways. A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice.

Consolidation in the 1960s

By the late 1950s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the world had increased and they were much more reliable than the very early ones. To those years we can ascribe the first steps in high level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along.

Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly. Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip-flops. This led to a range of chips known as the 7400 series. The gates and flip-flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else.
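A flip-flop, for instance, is nothing more than two such gates cross-coupled. The following minimal sketch models that idea in software; the gate function and the settling loop are simplifications for illustration, not a model of any particular 7400 part.

    # Minimal sketch: 7400-style building blocks modelled in software.
    # The 7400 itself packaged four 2-input NAND gates; cross-coupling
    # two of them gives the classic SR latch.

    def nand(a: int, b: int) -> int:
        """2-input NAND, the basic gate of the 7400 package."""
        return 0 if (a and b) else 1

    def sr_latch(s_n: int, r_n: int, q: int, q_n: int) -> tuple[int, int]:
        """One settling step of two cross-coupled NAND gates (active-low inputs)."""
        return nand(s_n, q_n), nand(r_n, q)

    # Drive the latch: pulse Set low, then Reset low, and watch Q.
    q, q_n = 0, 1
    for s_n, r_n in [(0, 1), (1, 1), (1, 0), (1, 1)]:
        # Iterate until the feedback loop settles, as the real circuit would.
        for _ in range(4):
            q, q_n = sr_latch(s_n, r_n, q, q_n)
        print(f"/S={s_n} /R={r_n} -> Q={q}")

Running it shows Q set to 1 by the Set pulse, held while both inputs are high, and cleared by the Reset pulse: one remembered bit built from two gates.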

These chips made a new kind of computer possible. It was called a minicomputer. It was something less than a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organisation, a business or a university was able to have a minicomputer for each major department.

Before long, minicomputers began to spread and become more powerful. The world was hungry for computing power and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.

The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry going the other way. As time goes on, people get more for their money, not less.

Research in Computer Hardware

The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work at the gate and flip-flop level and yet the overall level of integration was sufficient to give a degree of reliability far above that of discrete transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputer with fancy capability logic.

The 7400 series was still going strong in the mid 1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks. Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.

The RISC Movement and Its Aftermath

Early computers had simple instruction sets. As time went on, designers of commercially available machines added additional features which they thought would improve performance. Few comparative measurements were done, and on the whole the choice of features depended upon the designer's intuition.

In 1980, the RISC movement that was to change all this broke on the world. The movement opened with a paper by Patterson and Ditzel entitled "The Case for the Reduced Instruction Set Computer". Apart from leading to a striking acronym, this title conveys little of the insights into instruction set design which went with the RISC movement, in particular the way it facilitated pipelining, a system whereby several instructions may be in different stages of execution within the processor at the same time. Pipelining was not new, but it was new for small computers.
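A toy illustration of what pipelining buys: the sketch below pushes five instructions through the textbook five-stage breakdown (fetch, decode, execute, memory, writeback); the stage names are the conventional ones, not those of any particular machine.

    # Toy illustration of pipelining: five instructions flowing through
    # five stages, one stage per cycle, overlapping in the processor.
    STAGES = ["IF", "ID", "EX", "MEM", "WB"]
    instructions = [f"i{n}" for n in range(1, 6)]

    depth = len(STAGES)
    total_cycles = depth + len(instructions) - 1

    for cycle in range(total_cycles):
        # Instruction k occupies stage (cycle - k) in this cycle, if in range.
        row = []
        for k, instr in enumerate(instructions):
            s = cycle - k
            row.append(f"{instr}:{STAGES[s]}" if 0 <= s < depth else "      ")
        print(f"cycle {cycle + 1:2}: " + "  ".join(row))

    # Five instructions finish in 9 cycles instead of 25: several are in
    # different stages of execution within the processor at the same time.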

The RISC movement benefited greatly from methods which had recently become available for estimating the performance to be expected from a computer design without actually implementing it. I refer to the use of a powerful existing computer to simulate the new design. By the use of simulation, RISC advocates were able to predict with some confidence that a good RISC design would be able to out-perform the best conventional computers using the same circuit technology. This prediction was ultimately borne out in practice.

Simulation made rapid progress and soon came into universal use by computer designers. In consequence, computer design has become more of a science and less of an art. Today, designers expect to have a roomful of computers available to do their simulations, not just one. They refer to such a roomful by the attractive name of computer farm.
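In its very simplest form, such an estimate amounts to running an assumed instruction mix against assumed per-class cycle counts. The sketch below does just that; every number in it is invented purely for illustration.

    # Minimal sketch of performance estimation before implementation:
    # a weighted-average cycles-per-instruction (CPI) calculation over an
    # assumed instruction mix. All numbers are invented for illustration.

    mix = {"alu": 0.50, "load": 0.20, "store": 0.10, "branch": 0.20}

    # Hypothetical cycle counts for two candidate designs.
    cpi_conventional = {"alu": 4, "load": 6, "store": 5, "branch": 5}
    cpi_risc_pipelined = {"alu": 1, "load": 1.5, "store": 1, "branch": 1.8}

    def mean_cpi(cpi: dict) -> float:
        """Weighted average CPI over the instruction mix."""
        return sum(mix[c] * cpi[c] for c in mix)

    for name, cpi in [("conventional", cpi_conventional),
                      ("RISC, pipelined", cpi_risc_pipelined)]:
        print(f"{name:16}: mean CPI = {mean_cpi(cpi):.2f}")

A real design simulation is vastly more detailed, but the principle is the same: predict the throughput of a machine that does not yet exist from a model you can run on one that does.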

The x86 Instruction Set

Little is now heard of pre-RISC instruction sets, with one major exception, namely that of the Intel 8086 and its progeny, collectively referred to as x86. This has become the dominant instruction set, and the RISC instruction sets that originally had a considerable measure of success are having to put up a hard fight for survival.

This dominance of x86 disappoints people like myself who come from the research wings, both academic and industrial, of the computer field. No doubt business considerations have a lot to do with the survival of x86, but there are other reasons as well. However much we research-oriented people would like to think otherwise, high level languages have not yet eliminated the use of machine code altogether. We need to keep reminding ourselves that there is much to be said for strict binary compatibility with previous usage when that can be attained. Nevertheless, things might have been different if Intel's major attempt to produce a good RISC chip had been more successful. I am referring to the i860 (not the i960, which was something different). In many ways the i860 was an excellent chip, but its software interface did not fit it to be used in a workstation.

There is an interesting sting in the tail of this apparently easy triumph of the x86 instruction set. It proved impossible to match the steadily increasing speed of RISC processors by direct implementation of the x86 instruction set as had been done in the past. Instead, designers took a leaf out of the RISC book; although it is not obvious on the surface, a modern x86 processor chip contains hidden within it a RISC-style processor with its own internal RISC coding. The incoming x86 code is, after suitable massaging, converted into this internal code and handed over to the RISC processor where the critical execution is performed.

In this summing up of the RISC movement, I rely heavily on the latest edition of Hennessy and Patterson's books on computer design as my supporting authority; see in particular Computer Architecture, third edition, 2003, pp 146, 151-4, 157-8.
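The kind of translation involved can be sketched as follows. The micro-op mnemonics and decoding rules below are invented for illustration; they are not Intel's actual internal coding, which is proprietary and varies between designs.

    # Minimal sketch of the decode step described above: a complex
    # x86-style memory-to-register instruction is "massaged" into simple,
    # RISC-style internal operations (invented mnemonics).

    def to_micro_ops(instr: str) -> list[str]:
        """Translate one toy x86-style instruction into load/op/store micro-ops."""
        op, args = instr.split(maxsplit=1)
        dst, src = [a.strip() for a in args.split(",")]
        if op == "add" and src.startswith("["):          # add reg, [mem]
            return [f"uLOAD  t0, {src}",                 # read the memory operand
                    f"uADD   {dst}, {dst}, t0"]          # register-to-register add
        if op == "add" and dst.startswith("["):          # add [mem], reg
            return [f"uLOAD  t0, {dst}",
                    f"uADD   t0, t0, {src}",
                    f"uSTORE {dst}, t0"]                 # write the result back
        return [f"u{op.upper()} {dst}, {src}"]           # already simple: pass through

    for line in to_micro_ops("add [rbx+8], rax"):
        print(line)

The single read-modify-write x86 instruction comes out as three simple internal operations, each of a shape a RISC-style pipeline can execute directly.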

The IA-64 Instruction Set

Some time ago, Intel and Hewlett-Packard introduced the IA-64 instruction set. This was primarily intended to meet a generally recognised need for a 64 bit address space. In this, it followed the lead of the designers of the MIPS R4000 and Alpha. However, one would have thought that Intel would have stressed compatibility with the x86; the puzzle is that they did the exact opposite.

Moreover, built into the design of IA-64 is a feature known as predication which makes it incompatible in a major way with all other instruction sets. In particular, it needs 6 extra bits with each instruction. This upsets the traditional balance between instruction word length and information content, and it changes significantly the brief of the compiler writer.
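The effect of predication can be sketched as follows. The 6 extra bits name one of 64 predicate registers, and an instruction whose predicate is false is simply squashed; the operations and encoding below are a simplified illustration, not actual IA-64 code.

    # Simplified illustration of predication: every instruction carries a
    # predicate and is squashed when that predicate is false, so a
    # conditional can be compiled with no branch at all.

    predicates = {0: True}  # p0 is hardwired true, as in IA-64

    def execute(pred: int, op, *args):
        """Run op(*args) only if predicate register `pred` holds true."""
        if predicates[pred]:
            return op(*args)
        return None  # squashed: the instruction has no effect

    regs = {"r1": 7, "r2": 5, "r3": 0}

    # if (r1 > r2) r3 = r1 else r3 = r2  -- expressed branch-free:
    predicates[1] = regs["r1"] > regs["r2"]   # a compare sets p1 ...
    predicates[2] = not predicates[1]         # ... and its complement p2
    execute(1, regs.__setitem__, "r3", regs["r1"])  # (p1) mov r3 = r1
    execute(2, regs.__setitem__, "r3", regs["r2"])  # (p2) mov r3 = r2
    print(regs["r3"])  # 7: the larger value, with no conditional branch

The compiler writer's brief changes because it is now the compiler that must decide which branches to convert into predicated straight-line code.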

In spite of having an entirely new instruction set, Intel made the puzzling claim that chips based on IA-64 would be compatible with earlier x86 chips. It was hard to see exactly what was meant. Chips for the latest IA-64 processor, namely the Itanium, appear to have special hardware for compatibility. Even so, x86 code runs very slowly.

Because of the above complications, implementation of IA-64 requires a larger chip than is required for more conventional instruction sets. This in turn implies a higher cost. Such, at any rate, is the received wisdom, and, as a general principle, it was repeated as such by Gordon Moore when he visited Cambridge recently to open the Betty and Gordon Moore Library. I have, however, heard it said that the matter appears differently from within Intel. This I do not understand. But I am very ready to admit that I am completely out of my depth as regards the economics of the semiconductor industry.

AMD have defined a 64 bit instruction set that is more compatible with x86, and they appear to be making headway with it. The chip is not a particularly large one. Some people think that this is what Intel should have done. Since the lecture was delivered, Intel have announced that they will market a range of chips essentially compatible with those offered by AMD.

The Relentless Drive towards Smaller Transistors

The scale of integration continued to increase. This was achieved by shrinking the original transistors so that more could be put on a chip. Moreover, the laws of physics were on the side of the manufacturers. The transistors also got faster, simply by getting smaller. It was therefore possible to have, at the same time, both high density and high speed.

There was a further advantage. Chips are made on discs of silicon, known as wafers. Each wafer has on it a large number of individual chips, which are processed together and later separated. Since shrinkage makes it possible to get more chips on a wafer, the cost per chip goes down. Falling unit cost was important to the industry because, if the latest chips are cheaper to make as well as faster, there is no reason to go on offering the old ones, at least not indefinitely. There can thus be one product for the entire market.

However, detailed cost calculations showed that, in order to maintain this advantage as shrinkage proceeded beyond a certain point, it would be necessary to move to larger wafers. The increase in the size of wafers was no small matter. Originally, wafers were one or two inches in diameter, and by 2000 they were as much as twelve inches. At first, it puzzled me that, when shrinkage presented so many other problems, the industry should make things harder for itself by going to larger wafers. I now see that reducing unit cost was just as important to the industry as increasing the number of transistors on a chip, and that this justified the additional investment in foundries and the increased risk.
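The arithmetic can be sketched with the standard dies-per-wafer estimate given by Hennessy and Patterson; the wafer cost and die sizes below are round numbers invented for illustration.

    # Back-of-envelope sketch of the wafer economics described above.
    # Shrinking the feature size multiplies the dies per wafer; moving
    # to a larger wafer multiplies it again.
    import math

    def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
        """Crude estimate: usable wafer area divided by die area,
        less a fringe of partial dies lost at the edge."""
        radius = wafer_diameter_mm / 2
        area = math.pi * radius**2
        edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
        return int(area / die_area_mm2 - edge_loss)

    wafer_cost = 5000.0   # assumed constant processing cost per wafer
    die_area = 100.0      # assumed die size in mm^2 before the shrink

    for label, diameter, area in [
        ("before shrink, 200 mm wafer", 200, die_area),
        ("after 30% linear shrink    ", 200, die_area * 0.7**2),
        ("same shrink, 300 mm wafer  ", 300, die_area * 0.7**2),
    ]:
        n = dies_per_wafer(diameter, area)
        print(f"{label}: {n:4d} dies, ~${wafer_cost / n:.2f} per die")

On these invented numbers the cost per die falls roughly fivefold, which is why the industry judged the extra investment in larger-wafer foundries worthwhile.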

The degree of integration is measured by the feature size, which, for a given technology, is best defined as half the distance between wires in the densest chips made in that technology. At the present time, production of 90 nm chips is still building up.

Suspension of Law

In March 1997, Gordon Moore was a guest speaker at the celebrations of the centenary of the discovery of the electron held at the Cavendish Laboratory. It was during the course of his lecture that I first heard the fact that you can have silicon chips that are both fast and low in cost described as a violation of Murphy's law, or Sod's law as it is usually called in the UK. Moore said that experience in other fields would lead you to expect to have to choose between speed and cost, or to compromise between them. In fact, in the case of silicon chips, it is possible to have both.

In a reference book available on the web, Murphy is identified as an engineer working on human acceleration tests for the US Air Force in 1949. However, we were perfectly familiar with the law in my student days, when we called it by a much more prosaic name than either of those mentioned above, namely, the Law of General Cussedness. We even had a mock examination question in which the law featured. It was the type of question in which the first part asks for a definition of some law or principle and the second part contains a problem to be solved with the aid of it. In our case the first part was to define the Law of General Cussedness and the second was the problem: "A cyclist sets out on a circular cycling tour. Derive an equation giving the direction of the wind at any time."

The single-chip computer

At each shrinkage the number of chips was reduced and there were fewer wires going from one chip to another. This led to an additional increment in overall speed, since the transmission of signals from one chip to another takes a long time. Eventually, shrinkage proceeded to the point at which the whole processor, except for the caches, could be put on one chip. This enabled a workstation to be built that o
