Intel's Double Conroe will be literally hot
Clovertown will be 110 watts typical with a 1066MHz FSB. Intel's 110 watts is like AMD's 150 watts.
You may wonder why the quad-core Clovertown has a slower FSB (Woodcrest's FSB is 1333MHz). It is the same reason Xeon MP has a lower FSB than Xeon DP: on a shared bus, the more loads you put on it, the lower the frequency has to be, due to signal interference from those loads. With Woodcrest, each core gets 1333/2 = 667MHz of bus bandwidth. With Clovertown, each core will get 1066/4 = 266MHz (about 2GB/s).
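The per-core numbers above follow from simple division of the shared bus. A minimal sketch, assuming the 64-bit (8-byte) wide FSB of that era and an ideal even split among cores:

```python
def per_core_bandwidth_mb_s(fsb_mhz: float, bus_bytes: int, cores: int) -> float:
    """Peak bus bandwidth in MB/s, divided evenly among the cores sharing it."""
    return fsb_mhz * bus_bytes / cores

# Woodcrest: 1333 MHz FSB shared by 2 cores
woodcrest = per_core_bandwidth_mb_s(1333, 8, 2)   # 5332.0 MB/s per core
# Clovertown: 1066 MHz FSB shared by 4 cores
clovertown = per_core_bandwidth_mb_s(1066, 8, 4)  # 2132.0 MB/s per core (~2GB/s)
print(f"Woodcrest: {woodcrest:.0f} MB/s/core, Clovertown: {clovertown:.0f} MB/s/core")
```

This is a best-case figure: real contention, snoop traffic, and FSB protocol overhead would push effective per-core bandwidth lower still.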
Imagine you double the heatsinks (one for CPU, one for chipset) in the picture on the right, and you have Clovertown.
BTW: AMD's quad-core will be as cool as its dual-core.
41 Comments:
I did an analysis here.
INTEL, the darlings of the chip underworld. I'm not a techy or a computer geek. But I have made millions staying ahead of the curve. I read blogs to glean insight from people who lie awake at night thinking about things that I need to know about, then spew it out for free.
What have I learned from the AMD / Intel war reading blogs?
Intel folks will argue 32-bit is OK, incredible. Outdated architectures are OK as long as you can afford to amp up a few mega-wows of cache. Intel has a lot of fabs making buggy whips, but, but we do it at 45nm, they say. OK, Intel makes a lot of small buggy whips. Intel guys really hate the way AMD has pistol-whipped Chipzilla for the past four years, so they say anything to vent their frustration, even to the point of overlooking dark Intel business practices.
AMD folks are having so much fun at the top that they are having a hard time staying humble and can't resist every opportunity to urinate on the Intel guys for days gone by. The AMD camp has so much ammo at their disposal, with their state-of-the-art silicon, that they shoot at anything, and with lethal braggadocio bullets (so much pain).
I love this stuff. It's obvious Intel management has never read the book “Art of War” or they wouldn't be making so many tactical errors, like pissing off your enemy, for one.
So you ask, what side am I on? I personally don't care. I'm a scum-bag predator drawn to the Intel pile of sh*t by all the flies. The loser in this techno price war is so obvious. Losers are winners when you sell stocks short. This war is on the front page, and every short seller on the planet is licking his chops, and those flies I was talking about, they're my friends. When the LORD of the flies jumps in to feed, we all jump in.
Intel guys, find another lover, because even if AMD doesn't finish Intel off, the flies will. Sorry, it's business.
That's because AMD is switching to 65nm, whereas Intel is already there and will stay there with their quad core. So inevitably AMD will save power and Intel will not.
However, my main concern is the reduced specs and the lame-ass attempt by Intel to produce a quad-core chip by slapping two dual cores together. I think that is incredibly pathetic.
"Intel guys, find another lover, because even if AMD doesn't finish Intel off, the flies will. Sorry, it's business."
Hey... although I still don't understand how you seem to benefit from the price war so easily, your comment on both sides of the fight was very enjoyable to read. :-)
"However, my main concern is the reduced specs and the lame-ass attempt by Intel to produce a quad-core chip by slapping two dual cores together. I think that is incredibly pathetic."
No, what is incredibly pathetic is the first dual-core Pentiums; it was literally two processors STACKED ON TOP of each other. Alienware sold these systems for a very short time, because they kept melting. It was truly pathetic.
Who pays you to write these falsified blogs? Write something that makes sense.
While 110W is high, it's still lower than the current 130W TDP of the 965EE. For a quad core it really isn't that unreasonable, considering AMD's FXs have a 125W TDP. (Not directly comparable, I know, but high nevertheless.) AMD's 65nm process may help, but so far we've only heard of it being introduced on mid-range X2s in December, with the FX series staying on the tried 90nm process. (The same procedure as the 130nm-90nm transition.)
You're right, though: a 1066MHz FSB is going to impact performance. Intel needs to release on a 1333MHz FSB, which is entirely possible given that samples already available can reach 3.62GHz on a 1446MHz FSB using the older stepping 4.
http://www.xtremesystems.org/forums/showthread.php?t=104773&page=2
Thermal issues aren't really the problem for a 1333MHz FSB, given that both the 1066MHz FSB 2.67GHz Conroe and the 1333MHz FSB 2.67GHz Woodcrest have a 65W TDP. The limitation is just signal strength and board compatibility. We should hope that Intel figures out this 1333MHz FSB question (it really shouldn't be that hard) before January, although you yourself probably won't be wishing for the best for Intel.
Dude, you're talking about the higher end of the TDP for Clovertown/Tigerton. If you look at the Intel roadmaps, you'll see that the 110/120-watt quad-core processors are the best-performance ones, while the 80-watt one (that you forgot to mention) will be the mainstream one...
AND in the 2nd half of 2007, Intel is supposed to release a single-die quad-core processor based on the 45nm process, so by the time AMD has theirs out, Intel should be pretty close to its quad-core, single-die launch date.
They are good at slapping things together.
I speculated that they might market something called a 'dual-core North Bridge' for their desktop-rate dual-FSB solution and flood the market with "The World's First Dual-Core North Bridge", and end up doubling the weight of the mobo (and its cost) just because of the tremendous cooling the NB chip would need... unless Intel would sacrifice part of its 45nm fab capacity for the NB.
And while they are at it, they could slap 2 dual-core NB together and have quad-FSB.
... and there we have the world's smartest/fastest heaters.
AMD's 4x4 is the way to go. As long as the OS has good NUMA support, it gives a lot more flexibility to the customer.
And if prices stay the way they are now, then two lower-end dual cores will cost far less than one top-end dual core. So you are getting four cores for the price of two. And when the price of quad cores is reasonable, then, by extension, two low-end quad cores will easily be more powerful than one top-end quad core. So eight cores for the price of four!
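The cores-per-dollar arithmetic above can be sketched with made-up example prices (the figures are illustrative assumptions, not real 2006 price quotes):

```python
def cores_per_dollar(cores: int, total_price: float) -> float:
    """How many cores each dollar buys for a given configuration."""
    return cores / total_price

# Hypothetical prices: $1000 flagship dual core vs. two $300 low-end dual cores
flagship = cores_per_dollar(2, 1000)        # one top-end dual core
four_by_four = cores_per_dollar(4, 2 * 300) # 4x4 with two cheap dual cores
print(flagship, four_by_four)  # the 4x4 setup buys more cores per dollar
```

The same comparison carries over unchanged when quad cores replace dual cores in both slots, which is the "eight cores for the price of four" claim.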
[Please note... single threaded benchmarks will still run faster with a higher clock speed processor... but multi-threaded systems will be incredibly fast and smooth with more cores... capable of lifting more and doing more]
Thus I am saving my money until 4x4 is available on the desktop. This will be the true price/performance winner. For smart people vs. fanboys.
For the servers at my company, we are waiting until the next generation Opterons. Our current Opteron systems have been amazingly good and we are sure the next generation will be the same.
"I love this stuff. It's obvious Intel management has never read the book “Art of War” or they wouldn't be making so many tactical errors, like pissing off your enemy, for one."
Intel lives by another motto in an everlasting war against any and all that are not Intel:
"By way of deception thou shalt do war."
This credo governs all Intel activities, including, notably, benchmarketing.
Yeah, and the fact that they expect the FSB to be able to handle it. So, how much bandwidth is each core getting? lol
According to TG Daily, Clovertown
"will not only grow in die size, but also be rated at a MAXIMUM POWER CONSUMPTION of 110 watts".
That means the max TDP is 110W.
I think the main war is not about quad-core but about cheap dual-core construction. The question is what the margin would be for the X2 3800+, or even a cut-down 3600+ (2x256KB of cache), versus the 2MB Conroe. Whoever wins here is going to conquer the market. Midrange is also important, and I think this is the reason why Brisbane is the first target for 65nm.
Cost, margins, availability.
AMD has one problem here: the 90nm fab, which should be converted to 65nm ASAP, not in 2008. Or maybe the projection of 65nm capacity is so optimistic that AMD wants to feed the whole market from one fab?
4x4 and quad-core are a top-end market with 5-7% share, but very interesting.
Intel is stuck with an old architecture, and they look to me like an engineer managing a dam that is starting to develop big holes. He continues to patch, patch, patch, but in the end the dam will crumble.
Intel has had no innovative ideas for ages; they always chose the "pure strength - muscles, no brain" solution, based on capacity and production advantages, instead of building a cleaner and more advanced architecture. This behaviour paid off until now, but Intel has been blown away by AMD's new fabs.
No IMC, no HT, no 64-bit, no SOI, no NX, no this, no that. The only solution was a wrong one: Hyper-Threading. Now that kind of approach is going to limit Intel's performance for years. They say the 45nm node will rescue them. I doubt it: they said the same about the 130 to 90nm transition, and the only result was more heat and less performance per clock.
"No, what is incredibly pathetic is the first dual-core Pentiums; it was literally two processors STACKED ON TOP of each other. Alienware sold these systems for a very short time, because they kept melting. It was truly pathetic."
... wtf, that's new to me. So Intel was already ahead of its own plans to stack dies?
AFAIR they plan on doing that somewhere around 2015.
But if you mean the normal 90nm Pentium Ds (Smithfield), they are just two Prescotts next to each other....
"you'll see that the 110/120-watt quad-core processors are the best-performance ones, while the 80-watt one (that you forgot to mention) will be the mainstream one..."
You'll see that a 110W Clovertown is only 2.66GHz... a 3.0GHz one would be well over 130W, and beyond 3.0GHz... well... yea...
The 80W ones are probably 2.1GHz -- meaning low power and LOW PERFORMANCE. While AMD = 2.6GHz quad-core = 55W.
All Intel fanboys should despise what Intel's been doing, even with Conroe & Woodcrest out.
Core 2 is a decent core, no doubt, but Intel still forces us to accept this stone-age FSB; imagine how much better Core 2 would perform if a high-speed crossbar memory controller were there. Why doesn't Intel do that? Initially because it'd compete with Itanium; now because AMD already did it. Any reason that's good for customers? (Don't tell me it would bind CPU and chipset together - when's the last time you changed a motherboard but kept the CPU, because a better chipset was out?)
We all know power consumption is important, but Intel only gives typical power usage. Anyone knows that CPUs of the same model vary among each other, so if one Intel mobile chip happens to burn 10W more and bursts, you can't blame Intel, because it's YOUR workload's problem. INTEL will tell you what typical usage is; INTEL will tell you which CPU is qualified for it. Is there any goodwill toward Intel fans?
So multi-processing is going to be the norm, but Intel would rather give us slapped-together multicores than good HT links that bind older or slower processors together. With AMD, you can buy a CPU with two HT links today, and use it on 4x4 as a coprocessor to a faster CPU two years later - at least that's better than throwing it away, which is what Intel wants us to do with today's Conroe.
Sure, after three years Intel finally comes up with a decent core. But recognize it: Intel could've done better if it had wanted to; it could've been more customer-centric than it is now. We as its customers are simply stripped of the right to better treatment due to Intel's semi-monopoly, even after AMD has become a viable alternative.
Seeing so many back-and-forth discussions from Intel fans, I have to say this: I was myself an Intel fan a few years ago, but over the years I gradually realized that Intel doesn't give a shit about its customers. Unless you're Dell or Google, of course.
Oh, yeah, BTW, Intel's stock did great from 1980 to 2000. If that's why you love it, for this one reason I'd agree. But face it: the days when Intel was an engineering powerhouse for growth are over. It is now a giant marketing firm peddling lies and deceptions to BOTH its customers and shareholders.
I really hope it is not, and it will change, though.
Yes, quad core is supposed to consume more power than dual core (the difference in transistor count is obvious), so there is nothing to be surprised about. And the comparison with AMD is not fair, because they said that quad cores will consume the same amount of power as dual cores do TODAY; AMD dual cores TODAY are 90nm, but quad cores will only come on the 65nm process. The transistor shrink will compensate for the transistor difference.
Nevertheless, I do believe Clovertown will be inferior to the AMD offering, both in power consumption and in performance. We could call Clovertown "The Return of the Pentium D". But Clovertown will probably be a temporary solution. The difference between Clovertown and the next quad core from Intel will be as big as between the Pentium D and Conroe.
"The difference between Clovertown and the next quad core from Intel will be as big as between the Pentium D and Conroe."
Yeah, I agree. It'll have a double-decker front side bus by then ;)
Clovertown "a temporary solution"? Till when? 2008? 2009? The problem with Intel is as simple as this: they never had to fight FAIRLY against an opponent; they never fought without tricks. Their market was all controlled by them, doped and poisoned by Intel. So they never needed to "evolve" their basic architecture - which is stuck, in its basics, at Pentium-era stuff. They always acted NOT for the benefit of themselves and the customer, but selfishly, because they had no competition. As Sharikou has pointed out many times here, they developed a no-competition mentality in the corporation because there wasn't a competitor. So they were never forced to innovate. Most of Intel's progress in the last ten years is due only to "silicon production" improvements. AMD, on the other side, with its back against the wall, was forced to innovate; it was an "innovate or go BK" situation.
Now it's all over: Intel has to innovate, but to me, changing mentality and approach to the market and to customers is waaaaay harder than changing technology or silicon technique.
Your comments are becoming comical...
"Imagine you double the heatsinks in the picture on the right, and you have Clovertown."
The heatsinks out there now for the 130W NetBurst are more than capable of cooling that processor. By all means not the best solution, but it will work, and you don't have to double the size.
It's only a thought, but how many readers will you have left if you keep making bogus comments?
Thanks.
"Intel has to innovate, but to me, changing mentality and approach to the market and to customers is waaaaay harder than changing technology or silicon technique."
Just look at how quickly Intel made the right-hand turn and released 64-bit extensions to x86 when needed. Look how quickly the NetBurst architecture will be replaced with Core and Core 2. Look at Intel's leadership in virtualization technology. Intel is near 2nd-gen virtualization technology, and third gen will not be too far behind. It is ridiculous to say that Intel hasn't innovated. Utter rhetoric. Just look at Intel patents granted vs. AMD. You AMD fanbois are nuts, and diarrhea spews forth from your keyboards like from a baby's arse.
Intel's innovation??? WHERE?
The fact that Core is proven to be 95.5% derived from the P3 architecture means: no innovation, just tweaking. Wake up, the P3 lineage comes from 1995!!!
The only thing Intel has renewed constantly in recent years is.... sockets!!!
You mean Rambus?? Flop.
You mean Itanic? Flop.
You mean Hyper-Threading? Flop.
You mean Pentium D? Ultra-flop.
You mean still FSB-bound? Flop, at least outside low-end desktop PCs.
FB-DIMM? We will see; to me it's another potential bottleneck.
High power demand comes not only from the core but from the enormous amount of L2 cache they need to reach decent performance. L2 cache eats watts (and silicon). Let's say Core and its brothers consume like an Opteron: a new, marvelous 65nm core consumes like a four-year-old 90nm one. Not bad... not bad at all. Remember that Intel promised, shifting from 130 to 90nm, a great reduction in power needs: we saw an increase.
AMD innovation? At least in the x86 field:
- 64-bit performance. Intel denied the possibility of running it under the IA-32 architecture, then patched it horribly, and now it seems at a decent level: but AMD will push them back again with new improvements in sight.
- The IMC, integrated memory controller, which got rid of that old FSB concept. No more northbridge, bringing simpler, less expensive motherboards.
- A true dual- (and multi-) core structure through the Direct Connect architecture.
- Hypertransport.
- The No-eXecute (NX) bit.
- SOI and dual-stress-liner SOI, which enable faster transistor switching and more power efficiency >> less heat dissipation. To me this is the reason why quad cores will consume the same as dual-core Athlons.
I'm not an AMD fanboy, but come on!!! Who's innovative and who's not??
"Just look at how quickly Intel made the right-hand turn and released 64-bit extensions to x86 when needed. Look how quickly the NetBurst architecture will be replaced with Core and Core 2."
It took Intel 5 years to get the AMD64 instruction set partially cloned. Originally, Intel worked on the P4 version, took 3 years to do it, but it was broken. Intel spent 6 more months fixing it. The EM64T implementation in Core 2 is still missing some key features of AMD64, such as an IO-MMU (the ability for IO to DMA at addresses above 4GB).
Intel is still 4 generations behind AMD. Intel doesn't have a true multi-core design, it doesn't have an IMC, it doesn't have direct IO, it doesn't have a direct processor-to-processor link. It will take Intel another 5 years to get these done.
"Your comments are becoming comical...
It's only a thought, but how many readers will you have left if you keep making bogus comments?"
You are even more comical. You claim his comments are bogus, yet you come to this site to read and comment on them. So we can all depend on You, can't we? :))
For the sake of humanity, please stop this, please..... Seriously though, AMD is just not going to give up. They came a long way from the days of the K6 or K7. Those who say that AMD has been complacent have no idea what they're saying or thinking. Show me a company that's trying to vigorously capture market share while being complacent. Just a tidbit about myself: I came from the Celeron A days and was a devoted Intel loyalist, but the Athlon came along and the rest is history. Here's an interesting read about Woodcrest's shared cache, taken from amdzone.com quoting an article from this site about Woodcrest. Good read indeed.
http://www.itwire.com.au/content/view/4785/53/
"So we can all depend on You, can't we? :))"
I'm not sure if you're trying to be funny or what. But I had actually made a point.
Intel weather report: hot, with no relief until 2008.
News Flash; terrorist group praises Intel for its exploding laptops, but claim they thought of it first.
My reactions to the news titles:
"Intel's new chip aimed at AMD - San Francisco Chronicle"........ What, you mean their earlier chips were not?? lol
"Intel Has Pixar, BMW Backing New Xeon Chip" - Wall Street Journal ... Is your Journal read by morons??
"Intel says new chip won't gobble power - Boston Globe, United States" .... Yes, Intel always tells the TRUTH!!!
"Can Intel Retain Lead with Woodcrest? - BetaNews" ..... Excuse me!!! Retain the lead, or "save its ass"? LOL
"Intel: We're back - ZDNet" .... Hmmmm, I didn't read anywhere that Intel was on vacation... lol
"Woodcrest launches Intel into a new era - TG Daily" -- Yes, the era of the patched PIII architecture!!
"Intel Introduces Xeon Processor 5100 Series - TMCnet" -- Wow, you guys had to make sure the numbering started at 5000, ha ha...
"Intel Serves Up Chip Speed - Red Herring, CA" -- Yes, we at Red Herring are deaf, dumb and blind too...!!
Sharikou, what exactly is wrong with using FB-DIMMs in servers?
Hey, All-Knowing Sharikou, I really love your blogs and the comments that I read here.... It always makes me smile and laugh that I almost fell out of my chair... I can't believe that there are lots of people that are so stupid and yet feel that they are so intelligent... Keep it up, fanboys....
"It always makes me smile and laugh that I almost fell out of my chair"
Ha ha ha, me too. I laughed all day debating with Intel fanboys on the double explosion from Intel's double core...
"What exactly is wrong with using FB-DIMMs in servers?"
FB-DIMMs have higher latency than standard DIMMs due to the buffer. They also produce more heat and draw more power because of the buffer. On the other hand, FB-DIMMs allow up to six channels of memory per memory controller, and greater densities than standard DIMMs.
"Just look at how quickly Intel made the right-hand turn and released 64-bit extensions to x86 when needed. Look how quickly the NetBurst architecture will be replaced with Core and Core 2. Look at Intel's leadership in virtualization technology. Intel is near 2nd-gen virtualization technology, and third gen will not be too far behind. It is ridiculous to say that Intel hasn't innovated. Utter rhetoric. Just look at Intel patents granted vs. AMD. You AMD fanbois are nuts, and diarrhea spews forth from your keyboards like from a baby's arse."
Innovate - introduce: bring something new to an environment; "A new word processor was introduced"
Unless you're Intel, for whom "innovate" means copying a competitor's successful R&D and rebadging it as your own.
BTW, don't rebadge my "blowing out words like chunks of shit out of your asses" as "diarrhea spews forth from your keyboards like from a baby's arse", OK? I'll take you to court and sue you.
I am curious: should a quad core critically fail thermally, will there be enough explosive pressure to send the heatsink flying? Just a thought. And even if the chip doesn't explode but the caps do instead, is the motherboard manufacturer liable, or Intel?
"Imagine you double the heatsinks (one for CPU, one for chipset) in the picture on the right, and you have Clovertown."
That is no Intel chipset; that is an nForce 590 SLI. Chipsets from Intel are much less power-greedy than those from ATI and Nvidia, but they are also slower.
www.dailytech.com/article.aspx?newsid=2739
Thank you very much, Google Images.
"I'm not sure if you're trying to be funny or what. But I had actually made a point."
OK, here's what you also said:
"It's only a thought, but how many readers will you have left if you keep making bogus comments?"
Whether you agree with him or not, you are reading his blog, aren't you? Like INTC: say one thing and do another. Just like the launch of the Woody.
"That is no Intel chipset; that is an nForce 590 SLI. Chipsets from Intel are much less power-greedy than those from ATI and Nvidia, but they are also slower."
Okay, then just double the heatsink for the processor; you still end up with a TON of heatsink on that board.
"Remember that Intel promised, shifting from 130 to 90nm, a great reduction in power needs: we saw an increase.
AMD innovation? At least in the x86 field."
The increase in power was due to the NetBurst architecture and increasing clock speeds, NOT the process technology shift (look at Intel's mobile product space from 130nm to 90nm if you don't believe me).
As for your "AMD" innovations...
1. SOI? (Pretty sure that was IBM's innovation, and AMD just bought/licensed it.)
2. Dual stress liner (see #1)
3. The IMC was not an AMD innovation; however, they were the first to implement it in a widespread fashion.
Sorry, but AMD, with the same architecture, going from 130nm to 90nm, reduced heat and power consumption along with a speed increase.
Intel did the exact opposite; they promised some things, and reality was very different. Prescott, in their words, should have ramped to lunar-high clock speeds with little to no power increase. That wasn't the case, as the whole world has seen. I don't care, and to me - the consumer - it doesn't matter at all whether it was due to architecture or technology limits: it was the result of choices.
If a car model needs more fuel and has less performance than another costing the same amount of bucks, who cares if it's due to bad engineering or bad production technique? It's the final product that matters.
Face reality: all of Intel's architectural choices in recent years have been failures.
I don't think that Intel's engineers are dumb, not at all: the problem with Intel's behaviour is that marketing drives technical choices.
- Rambus? Why Rambus? Because using it requires a license and fees, instead of being an open standard like DDR.
- Why NetBurst? Because you need more gigahertz, because marketing could sell that a 3GHz CPU is better than a 2GHz one.
Who cares that they weren't the better technical solution?