Justin Rattner reminds people of his sins
Intel's CTO tried his best to bash Opteron, a technology that is light years ahead of Intel's 1970s FSB-based chips, in an apparent attempt to undo the damage he did to Intel last year.
19 Comments:
Given your description of the two technologies, it's amazing how, in all but 4P+ systems, the latest chips based on that 1970s technology all outperform AMD's latest offerings.
I guess design elegance is more important than ACTUAL performance?
I remember you said AMD is the one who innovates. I agree with that on many levels, but in at least one thing Intel is (again) one step ahead and will jump from 1970 to around 2020.
This is Terascale computing. Basically it means there will be tera of everything: OPS, internal and external chip bandwidth. Seems rather interesting. Who needs FSB/CSI/HT when you can have insanely fast optical links?
Intel is also doing some research on speculative multithreading together with that architecture. Basically it is roughly similar to the non-existent reverse hyperthreading, but coarser grained, probably in the range of a few thousand instructions per chunk.
AMD is touting its K8L as the first true quad-core CPU, and it has every right to do so. I just wonder when it will have its own 80-core CPU :)
AMD gets its slap for its sins:
http://yahoo.reuters.com/news/articlehybrid.aspx?storyID=urn:newsml:reuters.com:20060927:MTFH30630_2006-09-27_03-37-43_N26231575&type=comktNews&rpc=44
Hello:
I am a bit lost here; I would really appreciate a bit more information about this comment, even if it is a rehash of past topics. Besides, this new 4P Opteron is still a 32-bit architecture, right?? So they are still far behind AMD's, did I understand right??
Thanks in advance.
I agree with the first post; that's why I'm going to buy some AMD shares soon. :)
Intel is at a dead end with the FSB. 1066/1333 FSB is the end. What's next? Nothing. Maybe 16MB of cache? Or 32MB?
What will happen in 12 months when AM3 and HT 3.0 arrive?
Maybe a 45nm 64MB L2 cache is going to help... but not for free...
You are very arrogant Sharikou.
AMD got their connection architecture from the Transputer.
Do you see any attribution by AMD?
Other than that ratty cheap system you just built, it really seems like you have zero experience with AMD -- and certainly no experience with Opterons and servers.
If you had used Opterons, you would know that, yes, they offer good performance. But this good performance is more a function of the CPU microarchitecture, not the direct connect architecture.
And if you had used Opterons, you would know many Opteron systems have a lot more bugs than comparable Intel systems. Bugs that cost a lot of money. Just ask Google about Nvidia chipsets and Opteron and you will get an earful.
AMD has gotten very cocky and very arrogant themselves of late. That "for Dummies" book was something that was totally unnecessary. It was the mark of an amateur.
Meanwhile, AMD is shipping nothing of interest. No K8L, no 4x4, no Torrenza. In other words, every single "innovation" (as you would call it) is vaporware.
And still with everything vaporware, there is the arrogance.
Isn't that becoming just like the very Intel that you endlessly profess to hate?
Intel is a successful supercharge-it-till-it-blows manufacturer, while AMD is a prissy Italian supercar ;-)
Try plugging sharikou into the Firefox address bar lol :-))
Are you so desperate to stir something up that you would bring this old crap back up?
Someone said
"You are very arrogant Sharikou."
Mr PhD pretender, I couldn't agree more..
Sharikou, besides the lots of words on this blog where you claim you saw everything coming and had advice that would have solved world hunger if people had listened to you:
Can you share with us what real-world achievements merit believing you have ever delivered anything?
"1066/1333FSB is the end"
No it's not. The FSB is capable of speeds > 2GHz; I myself am running a ~1.8GHz FSB.
Though of course other solutions would often be better than the FSB.
In all seriousness, the IDF is revealing to us where Intel thinks the future of computing is: the Terascale computing initiative and Si photonics (a huge deal). All of these things are far from production, but the fact that a major manufacturer is investing in these technologies (and producing lab proofs of concept) bodes well for the future of the industry.
Say all the bad things you like about Intel, but I don't see AMD contributing to the future of the industry in the same fashion. HT3, 4, 5... and K8L, M, N, O, P aren't a fundamental shift in the industry. They aren't looking past CMOS and metal interconnects. If AMD really wants the level of respect Intel has earned, they will eventually need to step up and help define the future. Not to say that they can't, just that they haven't...
"Honestly AMD should never have the lead against Intel, it's pathetic that Intel, with all of their resources, can't stay ahead in the game."
So your analysis is that a large company should ALWAYS outperform a smaller company...interesting theory.
And by interesting, I really mean dumb.
"When they can't "buy" the best design, they switch to brute forcing their way to the fastest chip, it worked for Conroe, what about the P4?"
If I'm in the market for a computer, I'm going to buy the one with the best price/performance ratio. I honestly don't care if it has 64MB of on-die cache, HT5.0, SOI, embedded SiGe, stacked dies, glued dies, or dies held together with duct tape; if it performs the best, isn't that what matters?
If you had to buy a computer today and were given the choice between a K8 and a Core 2 for a desktop application, are you saying you'd prefer the K8 because it is not a "brute force" design? (Assuming you are like >90% of the population and are buying a new system, not upgrading one.)
Is it just me, or does it seem like whenever Intel talks about their products, they have to compare them to AMD products? I remember a few years ago Intel would talk as though AMD didn't exist. Funny how things have changed.
"And if you had used Opterons, you would know many Opteron systems have a lot more bugs than comparable Intel systems. Bugs that cost a lot of money. Just ask Google about Nvidia chipsets and Opteron and you will get an earful."
Buggy Opteron systems eh? An Opteron system composed of:
2 x Opteron 242
8 IDE disks
2 SCSI disks
one 3ware 7505
Tyan 2881 motherboard
350 (!!!) watt redundant power supply
has never given me any trouble.
Likewise, a newer system composed of a Tyan 2865 plus an AMD 4400+ X2 has also never given me any trouble, except for the lousy 3DLabs Wildcat Realizm 500 graphics card. Switching to a lousy (price-wise) ATI X1300 card gave me absolute stability.
"This is Terascale computing. Basically it means there will be tera of everything: OPS, internal and external chip bandwidth."
Any reason to believe it's not the reincarnation of those dead-and-stinking 10GHz processors?
"the future of computing is- the Terascale computing initiative, Si photonics (a huge deal)"
Just to let you know, the "Si photonics" is still light years away from doing computation. As quoted from Intel's website:
"The researchers believe that with this development, silicon photonic chips containing dozens or even hundreds of hybrid silicon lasers could someday be built using standard high-volume, low-cost silicon manufacturing techniques."
1. Do you see "dozens or even hundreds" on a chip? Now tell me how many outputs you'll need in a single 128-bit SSE ALU?
2. Do you see "could someday be built using standard...."? If you understood the language right, it's at best a possibility.
Yes, it is a huge thing for the researchers and the research community, but as I said, it's light years from the kind of commercial uses you had in mind.
Now why doesn't Intel just adopt AMD's HT and Torrenza, which would accelerate development and research cooperation for the whole industry?
Why do you want to build or buy an AM2 machine instead of a Core2:
1. For the present, you can spend less than $200 or even $150 and still get very good performance.
2. For the future, pretty much any AM2 motherboard will be able to plug in a native quad-core K8L a year later.
3. You will have more of a performance benefit when upgrading to 64-bit Vista.
For most people (including me), the first two reasons are enough. Oh, I know you could plug a Kentsfield into a Conroe motherboard, but you really have to pick the right ones, and if your MB can't raise the FSB like there's no tomorrow, it will be the bottleneck for quad-cores.
"And like the 90% that are building new machines, the majority of end users aren't looking to get a high end processor in their machines. Those of us that are here might, but we are an exception to the general population."
I said buy (actually "but", which was a typo) - just what % of the population do you think builds their own computer?
Your "90%" is 90% of what? 90% of a small number is still a small number...
My point is that if you're just dropping in a CPU, AMD makes sense; if you are not doing that (which constitutes probably >90-95% of desktop sales), the $150-300 Core 2s seem like the best deal. And the high end is clearly the 6600 or 6700 (the Extreme Edition is a waste of money).
"if your MB can't raise FSB like no tomorrow, it will be the bottleneck for quad-cores."
This is complete and unsubstantiated crap - do you have a single benchmark backing this up? It was definitely true of the Netburst 2-die/1-package solution, but the Core 2 is a different architecture and does not use FSB bandwidth nearly as heavily as the P4 architecture did.
"You will have more performance benefit when upgarding to 64-bit Vista"
Edward, please point us to this link again - you have made this claim several times now. I hope you're not referring to the article which showed AMD getting a bigger jump going from 32-bit to 64-bit than Core 2, because if you look at those benchmarks, the Core 2 architecture was still outperforming the K8 at 64-bit on an absolute scale (with the exception of, I think, one or two benchmarks).
I'm assuming you are basing your statement on another test?