
The editors have poured years of effort into collecting and curating this gift package, a resource list for the internet and new-media industries, and are now sharing it with everyone. Remember to pass it on!
We are recruiting a community operations manager: someone who can plan topics and write well, is good at communication and coordination, manages capably, works steadily, and shows initiative; internship experience running a website or forum is a plus. The role touches every part of community operations, and we are also recruiting an intern assistant to support it. Location: Wuxi. Early start preferred; send resumes to at.chen@nmclabs.org.
|A tech portal with attitude
|A top-three tech portal
|A top-three tech portal
|A top-three tech portal
|IT news and a geek community
|An IT community and blog commentary
|Home base for China's webmasters
|The community programmers follow most
|A spiritual home for grassroots entrepreneurs
|Cutting-edge programming and technical discussion
|IT and telecom industry news
|IT news with reader comments
|An information hub for marketers
|A Chinese media think tank
|Linking every industry across the world; the internet will surely be realized
|A leading global library of mobile-internet intelligence
A telecom vertical portal run by the China Mobile Research Institute
|Focused on mobile-internet innovation & startups
|Easy syncing, convenient mobility
|An independent tech observatory!
|Covering innovative mobile-internet startups
|Focused on frontier internet news
|Exploring the mysteries of Google
|"Thumb" (mobile) news and niche discussion
|Covering internet entrepreneurship
|Covering emerging internet products
|Covering the internet frontier
|Focused on mobile-internet products
|Covering internet product design
|Global tech news in focus
|Aggregating tech news from around the world
|Focused on international internet innovation
|Focused on WordPress and the web
|Web trends, marketing, and development
|A world-renowned tech blog (English)
|A world-renowned tech blog (English)
|A social-media blog (English)
|News on technology innovation and capital (English)
|Scouring the world for creative ideas (English)
|A world-renowned tech blog (English)
Covering the internet and search engines
Well-known personal IT blogs
|The collected essays of Paul Graham
|A well-known IT blog
|keso's well-known IT blog
|Covering the internet and search engines
|Web trends, marketing, and development
|CEO of 百姓网 (Baixing.com)
|Product director at NetEase (网易)
|A well-known blog devoted to product management
Data and research
E-commerce channel of an e-commerce research platform
China Internet Network Information Center (CNNIC)
China-wide internet traffic dashboard; data center; internet data platform
Internet data center
Academic research
|New-media and creative-industries research at Peking University
|Peter Young, professor of new media at San Jose State University, California
|Peking University; a critic of new media
|A new-media observer; lecturer at the School of Media and Design, Shanghai Jiao Tong University
|Professor at Peking University's School of Journalism and Communication
|Xu Hong, executive vice dean of Peking University's School of Journalism and Communication
|He Wei, founder; PhD from Tsinghua University, lecturer at Beijing Normal University, researcher on digital media and the creative industries
E-publications and reports
|McKinsey Quarterly
|The world's largest Chinese-language IT community
|Tracking change and innovation in enterprise software development
|The hottest iPhone development community
|A technical community for sharing and exploration
|An open-source community for PHPers, by PHPers
|Publishing the latest from the official MSDN site while relaying voices from the technical community
|Dedicated to promoting Android in China
|Microsoft's comprehensive cloud-computing solutions
Professional marketing blogs
|Social-media marketing
|A leading online media and industry news platform serving brand communications and creative advertising in China and worldwide; among the most respected and influential professional advertising and communications media in China
|Get to know the freshest side of the interactive world!
|品牌几何 (Brand Geometry): the frontier of brand building in the digital age!
|Fresh news about marketing
|Focused on video advertising and viral advertising
Focused on email marketing
Web trends, marketing, and development
Social-media marketing analysis
Research on corporate social-media strategy
Official Weibo account for social-marketing best practices
|Social-marketing case studies
Exploration and research in social-media marketing
互动通, a digital marketing platform
A digital communications agency
Notable people in digital communications
|Vice president at Beijing 口碑互动 (Koubei Interactive); an IT person on the fringe of media
|CEO of 易凯资本 (China eCapital Corporation)
|Zhang He, marketing director of Beijing 灵动快拍 Information Technology and founder of the 企赢 internet-marketing agency
Marketing director of Shanghai 九诚 Advertising
|A veteran journalist and columnist
|Founder of the IT recruiting Weibo group
|Co-founder of NMC Labs, the new-media communication technology laboratory; technology partner at 华海资本 (Huahai Capital, Hong Kong)
|Founder of Mp4works; an observer of overseas advertising and marketing cases
|Founder of SocialBeta
|Liu Weilin, Taiwanese writer and commentator on internet trends
|Partner at 博友咨询; founder of the 犀牛人 youth society; author of 《从菜鸟到总监》 ("From Rookie to Director")
|Sun Xuefeng, vice president of operations at im2.0 Interactive Marketing
|Operations manager for Alibaba's finance arm
|Chairman of 4399 Games and a well-known angel investor
|A down-to-earth entrepreneur; grassroots startups and Weibo marketing
|CEO of 中海互动, focused on social marketing
|Founder of 华艺传媒 (Huayi Media)
|Expert committee member at the Integrated Internet Marketing Research Center of the China Electronic Commerce Association
|Executive Director of WE Digital at 威汉营销传播集团 (WE Marketing Group)
General manager of Shanghai 悦普 Network Technology
Tech blogs in the nonprofit sector
I. Organization blogs
|The place for nonprofits and libraries
|A marketing blog focused on nonprofits
|Membership software for associations, nonprofits, and clubs
|Interactive media and marketing for nonprofits and social causes
|A news blog for the nonprofit sector
|Helping nonprofits make the smartest decisions about how to use the web
|An influential global network built on mobile technology
|Using the internet to serve charities and nonprofit organizations
II. Resource aggregation sites
|Technology for good causes in focus
III. Personal blogs
|How networked nonprofits use social media to transform themselves
|Growing technology through organizational capacity-building and community engagement
|An independent consultant with Marts & Lundy
|Writer, educator, coach, and consultant
|Helping nonprofits carry out the most effective marketing
|Technology consulting for nonprofits: the right training, writing, and effective solutions
|"Father of football"; an internet strategist for public-interest causes
|IT director at a public-interest law firm
Related professional blogs
|A professional product-management site
|A user-centered design community
|A beginner's guide to SEO
|时代嘉道 group: Shanghai 嘉道 Information Technology and the Fudan Journalism-时代嘉道 Online Communication Influence Lab, a leading third-party internet measurement agency
|Sharing the latest news from every corner of the internet industry, plus startup lessons, fundraising, operations management, and optimization tips, to help internet entrepreneurs grow fast!
Southern Weekly (《南方周末》) new-media department|Covering the transformation of traditional media, mobile app development, digital publishing, and online media
|Providing as much office infrastructure as possible; an entrepreneur's office costs no more than a cup of coffee
Pmcaff product-manager salon
None of the lists above is complete, and we warmly welcome recommendations of further resources. You can leave them directly in the comments, or add them yourself on our WIKI edit page. Let this resource list become a navigation chart for newcomers.
An English text on microcontrollers (5,000 words), complete Chinese translation included! A good resource I just found; sharing it with everyone. (From the CSDN.NET download channel.)
A Brief History of Computer Development
A lecture to the IEE
Maurice Wilkes
Computer Laboratory, University of Cambridge
The first stored-program computers began to work around 1950; one of them was the EDSAC (Electronic Delay Storage Automatic Calculator), which we built at Cambridge and first used in the summer of 1949.
The earliest experimental computers were built by people like me with broad backgrounds. We all had extensive experience in electronic engineering, and we were confident that this experience would serve us well. That proved true, although we still had much to learn. Above all, transients had to be handled with care: what would cause only a harmless flash on a television screen could lead to a string of errors in a computer.
In designing circuits we often found ourselves spoiled for choice. For example, we could use vacuum-tube diodes as gates, as in the EDSAC, or pentodes with control signals on both grids, a scheme widely used in other designs. Choices of this kind persisted as families of logic came into use. Anyone who has worked in computing will remember TTL, ECL, and CMOS; by now CMOS has become dominant.
In the first years, the IEE (Institution of Electrical Engineers) was still dominated by power engineering. We had to overcome a number of obstacles to get the IEE to recognise radio engineering, together with the rapidly developing field of electronics, as an activity in its own right. We also ran into many difficulties because the power engineers' way of doing things was not ours. A minor irritation was that every paper the IEE published was expected to open with a lengthy account of earlier practice, which was hard to supply when, in the early days, there was little such experience to draw on.
Consolidation in the 1960s
By the early 1960s the age of individual heroes was over and computers were being taken seriously. The number of computers in the world had grown considerably, and they were more reliable than before. To those years I would ascribe the first steps in high-level languages and the birth of the first operating systems. Time-sharing was getting started, and computer graphics followed.
Above all, transistors began to replace vacuum tubes. This change was an unavoidable challenge for the engineers of the day: they had to forget the circuits they knew and start afresh. It can only be said that they rose superbly to the challenge, and the change could hardly have gone more smoothly.
Small-scale integrated circuits and minicomputers
Soon it became possible to put more than one transistor on a single piece of silicon, and the integrated circuit was born. As time went on, one chip could hold enough transistors for a small number of gates and flip-flops, and from this came the chips we know as the 7400 series. Each gate or flip-flop was independent and had its own pins; they could be wired together to build a computer or anything else.
These chips made a new kind of computer possible: the minicomputer. It was something less than a mainframe, yet powerful, and far more affordable. A business department or a university could own a minicomputer of its own, instead of the expensive mainframe that only a large organisation could obtain.
As minicomputers caught on and grew more capable, a world hungry for computing power, long frustrated because industry could not supply it at scale or at a tolerable price, finally found relief: the minicomputer transformed the situation.
The fall in the cost of computing did not begin with the minicomputer; it had always been that way. This is what I meant in my summary when I said that "inflation" in the computer industry ran the other way: as time goes on, people get more for their money, not less.
Research in hardware
The era I am describing was an astonishing one for people doing research in computer hardware. Users of the 7400 series could work at the level of gates and flip-flops, and the chips were far more reliable than discrete transistors. Researchers, in universities or elsewhere, could give their imaginations free rein and build any digital device that could be wired together. In the Computer Laboratory at Cambridge we built the CAP, a minicomputer with remarkable capability logic.
The 7400 series was still going strong in the mid-1970s and was adopted for the Cambridge Ring, a pioneering wide-band local area network. The design study for the Ring was published before the Ethernet was announced. Until these two systems appeared, people had mostly been content with local networks based on teletype exchanges.
A ring demands high reliability: as pulses travel round it they must be continually amplified and regenerated. It was the high reliability of the 7400 series that gave us the courage to embark on the Cambridge Ring project.
The birth of the reduced instruction set computer
Early computers had simple instruction sets. As time went on, the designers of commercial machines added further features that they believed would improve performance. Few comparative measurements were made; on the whole, the choice of features rested largely on the designer's intuition.
In 1980 the RISC movement changed the computing world. It was set off by a paper by Patterson and Ditzel entitled "The Case for the Reduced Instruction Set Computer".
Beyond the striking acronym RISC, the title conveyed some of the insights into instruction-set design… (the translation breaks off here; the original continues at length)
Progress in Computers
Prestige Lecture delivered to IEE, Cambridge, on 5 February 2004
Maurice Wilkes
Computer Laboratory
University of Cambridge
The first stored program computers began to work around 1950. The one we built in Cambridge, the EDSAC, was first used in the summer of 1949.
These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated with care: what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.
As far as computing circuits were concerned, we found ourselves with an embarras de richesse. For example, we could use vacuum tube diodes for gates as we did in the EDSAC or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted and the term families of logic came into use. Those who have worked in the computer field will remember TTL, ECL and CMOS. Of these, CMOS has now become dominant.
In those early years, the IEE was still dominated by power engineering and we had to fight a number of major battles in order to get radio engineering, along with the rapidly developing subject of electronics (dubbed in the IEE "light current electrical engineering"), properly recognised as an activity in its own right. I remember that we had some difficulty in organising a conference because the power engineers' ways of doing things were not our ways. A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice.
Consolidation in the 1960s
By the late 50s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the world had increased and they were much more reliable than the very early ones. To those years we can ascribe the first steps in high level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along.
Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.
Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip flops. This led to a range of chips known as the 7400 series. The gates and flip flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else.
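To make the wiring idea concrete, here is a minimal sketch, in Python standing in for physical TTL wiring, of the classic trick of cross-coupling two NAND gates (each one quarter of a 7400 chip) into a set-reset latch; the function names are mine, not anything from the lecture.

# A 2-input NAND gate: one quarter of a 7400-series chip.
def nand(a, b):
    return 0 if (a and b) else 1

# Cross-couple two NAND gates into a set-reset (SR) latch.
# Inputs are active-low: pulling s_n to 0 sets Q, pulling r_n to 0 resets it.
def latch_step(s_n, r_n, q, q_n):
    return nand(s_n, q_n), nand(r_n, q)

q, q_n = 0, 1
for s_n, r_n in [(0, 1), (1, 1), (1, 0), (1, 1)]:   # set, hold, reset, hold
    for _ in range(4):                  # iterate until the feedback settles
        q, q_n = latch_step(s_n, r_n, q, q_n)
    print("s_n=%d r_n=%d -> Q=%d" % (s_n, r_n, q))

The point is the one Wilkes makes: each gate is independent, and the behaviour comes entirely from how they are wired together.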
These chips made a new kind of computer possible. It was called a minicomputer. It was something less than a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organisation, a business or a university was able to have a minicomputer for each major department.
Before long minicomputers began to spread and become more powerful. The world was hungry for computing power and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.
The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry 'going the other way'. As time goes on people get more for their money, not less.
Research in Computer Hardware
The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work at the gate and flip-flop level and yet the overall level of integration was sufficient to give a degree of reliability far above that of discrete transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputer with fancy capability logic.
The 7400 series was still going strong in the mid 1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks.
Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.
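The arithmetic behind that remark is worth spelling out: a circulating pulse must survive every single regeneration, so per-stage reliabilities multiply. A back-of-envelope sketch in Python, with all figures invented for illustration rather than taken from the Ring's specification:

# If a bit is regenerated at n stations per trip and makes many trips,
# its survival probability is p ** (stations * trips).
stations, trips = 50, 1000

p = 0.999                     # a "merely good" per-stage success rate
print("p=0.999  ->", p ** (stations * trips))    # ~2e-22: hopeless

p = 1 - 1e-9                  # the sort of figure reliable TTL made plausible
print("p=1-1e-9 ->", p ** (stations * trips))    # ~0.99995: workable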
The RISC Movement and Its Aftermath
Early computers had simple instruction sets. As time went on designers of commercially available machines added additional features which they thought would improve performance. Few comparative measurements were done and on the whole the choice of features depended upon the designer’s intuition.
In 1980, the RISC movement that was to change all this broke on the world. The movement opened with a paper by Patterson and Ditzel entitled The Case for the Reduced Instruction Set Computer.
Apart from leading to a striking acronym, this title conveys little of the insights into instruction set design which went with the RISC movement, in particular the way it facilitated pipelining, a system whereby several instructions may be in different stages of execution within the processor at the same time. Pipelining was not new, but it was new for small computers.
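A toy occupancy chart makes the overlap visible. This sketch uses the textbook five stages (fetch, decode, execute, memory, write-back), which the lecture does not itself specify, and assumes no stalls:

# Idealized pipeline: instruction n occupies stage (cycle - n) each cycle,
# so several instructions are in different stages at the same time.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]
program = ["i1", "i2", "i3", "i4"]

for cycle in range(len(STAGES) - 1 + len(program)):
    slots = []
    for n, instr in enumerate(program):
        s = cycle - n
        slots.append("%s:%-3s" % (instr, STAGES[s]) if 0 <= s < len(STAGES) else "      ")
    print("cycle %d: %s" % (cycle + 1, "  ".join(slots)))

With no stalls, k instructions complete in (k + stages - 1) cycles rather than k * stages, which is where the speed-up comes from.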
The RISC movement benefited greatly from methods which had recently become available for estimating the performance to be expected from a computer design without actually implementing it. I refer to the use of a powerful existing computer to simulate the new design. By the use of simulation, RISC advocates were able to predict with some confidence that a good RISC design would be able to out-perform the best conventional computers using the same circuit technology. This prediction was ultimately borne out in practice.
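In the same spirit, though far cruder than what the RISC advocates actually did, one can estimate a design's performance from an assumed instruction mix and per-class cycle counts before building anything. All numbers below are invented for illustration:

# Toy performance estimate: cycles per instruction (CPI) for two
# hypothetical designs, weighted by an assumed instruction mix.
mix = {"alu": 0.45, "load": 0.25, "store": 0.10, "branch": 0.20}
complex_cpi = {"alu": 4, "load": 6, "store": 6, "branch": 5}      # conventional design
risc_cpi = {"alu": 1, "load": 1.4, "store": 1.2, "branch": 1.6}   # pipelined RISC

def cpi(costs):
    # Average cycles per instruction, weighted by the mix.
    return sum(mix[k] * costs[k] for k in mix)

# Suppose the RISC executes ~1.3x as many (simpler) instructions.
speedup = cpi(complex_cpi) / (cpi(risc_cpi) * 1.3)
print("CPI %.2f vs %.2f, estimated speedup ~%.1fx at equal clock"
      % (cpi(complex_cpi), cpi(risc_cpi), speedup))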
Simulation made rapid progress and soon came into universal use by computer designers. In consequence, computer design has become more of a science and less of an art. Today, designers expect to have a roomful of computers available to do their simulations, not just one. They refer to such a roomful by the attractive name of computer farm.
The x86 Instruction Set
Little is now heard of pre-RISC instruction sets with one major exception, namely that of the Intel 8086 and its progeny, collectively referred to as x86. This has become the dominant instruction set and the RISC instruction sets that originally had a considerable measure of success are having to put up a hard fight for survival.
This dominance of x86 disappoints people like myself who come from the research wings, both academic and industrial, of the computer field. No doubt, business considerations have a lot to do with the survival of x86, but there are other reasons as well. However much we research-oriented people would like to think otherwise, high level languages have not yet eliminated the use of machine code altogether. We need to keep reminding ourselves that there is much to be said for strict binary compatibility with previous usage when that can be attained. Nevertheless, things might have been different if Intel's major attempt to produce a good RISC chip had been more successful. I am referring to the i860 (not the i960, which was something different). In many ways the i860 was an excellent chip, but its software interface did not fit it to be used in a workstation.
There is an interesting sting in the tail of this apparently easy triumph of the x86 instruction set. It proved impossible to match the steadily increasing speed of RISC processors by direct implementation of the x86 instruction set as had been done in the past. Instead, designers took a leaf out of the RISC book: although it is not obvious on the surface, a modern x86 processor chip contains hidden within it a RISC-style processor with its own internal RISC coding. The incoming x86 code is, after suitable massaging, converted into this internal code and handed over to the RISC processor where the critical execution is performed.
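Schematically, the hidden translation works like the sketch below, which cracks a register-memory instruction into simpler register-register micro-operations; the mnemonics and micro-op format are invented for illustration and are not Intel's actual internal coding.

# Crack CISC-style instructions into RISC-style micro-ops, as a modern
# x86 front end does (in vastly simplified form).
def crack(instr):
    op, dst, src = instr.replace(",", " ").split()
    uops = []
    if src.startswith("["):            # memory operand: peel off a load
        uops.append("uLOAD t0, %s" % src)
        src = "t0"
    uops.append("u%s %s, %s, %s" % (op.upper(), dst, dst, src))
    return uops

for line in ("add eax, [mem1]", "add eax, ebx"):
    print("%-16s -> %s" % (line, "; ".join(crack(line))))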
In this summing up of the RISC movement, I rely heavily on the latest edition of Hennessy and Patterson's books on computer design as my authority; see in particular Computer Architecture, third edition, 2003, pp. 146, 151-4, 157-8.
The IA-64 Instruction Set
Some time ago, Intel and Hewlett-Packard introduced the IA-64 instruction set. This was primarily intended to meet a generally recognised need for a 64 bit address space. In this, it followed the lead of the designers of the MIPS R4000 and Alpha. However one would have thought that Intel would have stressed compatibility with the x86; the puzzle is that they did the exact opposite.
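The need is easy to quantify: 32 bits of address reach only 2**32 bytes, i.e. 4 GiB, which large memories were beginning to outgrow, while 64 bits reach about 1.8 x 10**19 bytes.

# Addressable memory for 32- and 64-bit addresses.
for bits in (32, 64):
    print("%d-bit: 2**%d = %.3e bytes" % (bits, bits, float(2 ** bits)))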
Moreover, built into the design of IA-64 is a feature known as predication which makes it incompatible in a major way with all other instruction sets. In particular, it needs 6 extra bits with each instruction. This upsets the traditional balance between instruction word length and information content, and it changes significantly the brief of the compiler writer.
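To illustrate what predication means in practice: every instruction carries a predicate-register field (6 bits can address 64 one-bit predicate registers), and an instruction whose predicate is false is squashed rather than branched around. A minimal sketch with an invented instruction format:

# Predicated execution: each instruction names a guard predicate and is
# squashed (treated as a no-op) when the guard is false, so a short
# if/else needs no branch instructions at all. The format is invented.
pred = {"p0": True, "p1": False, "p2": False}   # p0 is hard-wired true
regs = {"r1": 10, "r2": 3, "r3": 0}

program = [
    ("p0", "cmplt", "p1", "p2", "r2", "r1"),  # p1, p2 = (r2 < r1), not (r2 < r1)
    ("p1", "add",   "r3", "r1", "r2", None),  # then-arm: runs iff p1
    ("p2", "sub",   "r3", "r1", "r2", None),  # else-arm: runs iff p2
]

for guard, op, a, b, c, d in program:
    if not pred[guard]:
        continue                               # squashed: guard is false
    if op == "cmplt":
        pred[a] = regs[c] < regs[d]
        pred[b] = not pred[a]                  # complementary predicate
    elif op == "add":
        regs[a] = regs[b] + regs[c]
    elif op == "sub":
        regs[a] = regs[b] - regs[c]

print(regs)    # r2 < r1 here, so only the add ran: r3 == 13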
In spite of having an entirely new instruction set, Intel made the puzzling claim that chips based on IA-64 would be compatible with earlier x86 chips. It was hard to see exactly what was meant.
Chips for the latest IA-64 processor, namely, the Itanium, appear to have special hardware for compatibility. Even so, x86 code runs very slowly.
Because of the above complications, implementation of IA-64 requires a larger chip than is required for more conventional instruction sets. This in turn implies a higher cost. Such, at any rate, is the received wisdom, and, as a general principle, it was repeated as such by Gordon Moore when he visited Cambridge recently to open the Betty and Gordon Moore Library. I have, however, heard it said that the matter appears differently from within Intel. This I do not understand. But I am very ready to admit that I am completely out of my depth as regards the economics of the semiconductor industry.
AMD have defined a 64 bit instruction set that is more compatible with x86 and they appear to be making headway with it. The chip is not a particularly large one. Some people think that this is what Intel should have done. [Since the lecture was delivered, Intel have announced that they will market a range of chips essentially compatible with those offered by AMD.]
The Relentless Drive towards Smaller Transistors
The scale of integration continued to increase. This was achieved by shrinking the original transistors so that more could be put on a chip. Moreover, the laws of physics were on the side of the manufacturers. The transistors also got faster, simply by getting smaller. It was therefore possible to have, at the same time, both high density and high speed.
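The claim that shrinking gives density and speed together is first-order, Dennard-style scaling: cut linear dimensions by a factor k and area per transistor falls as k squared while gate delay falls roughly as k. A back-of-envelope sketch with invented starting figures:

# First-order scaling: each "shrink" of linear dimensions by k gives
# k**2 more transistors per unit area and ~k times shorter gate delay.
k = 1.4                                   # one classic process step
transistors, delay_ns = 1.0e6, 10.0       # invented starting point

for step in (1, 2, 3):
    transistors *= k ** 2
    delay_ns /= k
    print("after %d shrink(s): %.2e transistors, %.2f ns delay"
          % (step, transistors, delay_ns))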
There was a further advantage. Chips are made on discs of silicon, known as …
Only part of it is excerpted above; download the file to read the rest!