Sunday, July 15, 2007

The only thing that is keeping Intel alive

Intel is still alive for a single reason: AMD's complex 65nm SOI process is not delivering the expected clock speeds.

As we have seen, although its 65nm SOI process is low power, AMD is unable to push clock speeds higher on it. The X2 6000+ (3GHz) is manufactured on the 90nm process. The highest-performing 65nm AMD processor is clocked at 2.6GHz.

Once AMD optimizes its 65nm process, Intel will be dead in less than three months.

PS: One reader made some reasonably articulated comments on AMD's strategic moves, such as the K10 design and the ATI acquisition -- a rare thing from an Inteler. However, as an Inteler, he could only look forward seven inches. The x86 market is completely different today with AMD being the industry standard setter in server, desktop and mobile. Whatever Intel does, it is just copying AMD's ideas. The only thing that is holding AMD back is its apparent difficulty with 65nm SOI process. In terms of architecture, AMD is years ahead of Intel. Although Intel has copied AMD64 instruction set, NX bit and emulated multi-core, Intel's architecture is stuck with the primitive FSB design. Once AMD rolls out Direct Connect Architecture 2.0 on fast clocks, Intel will look retarded. There are rumors that Intel will copy more from AMD, such as distributed memory architecture, direct connect and embedded memory controllers. But those are still on paper.
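
For anyone who has never seen the NX bit at work, here is a minimal C sketch of what it enforces (a Linux illustration, assuming an NX-capable CPU and a kernel that honors it; the deliberate jump into a data page is the whole point):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    unsigned char code[] = { 0xC3 };  /* x86 RET opcode */
    /* Map one page read/write but NOT executable. */
    void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) return 1;
    memcpy(page, code, sizeof code);
    /* With NX enforced, calling into this data page faults
       (SIGSEGV) instead of executing the injected byte. */
    ((void (*)(void))page)();
    puts("NX not enforced: the data page executed");
    return 0;
}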

AMD has everything in place to finish Intel. The only thing it needs to do now is to get 65nm SOI to work faster.

33 Comments:

Blogger Chuckula said...

The only thing that is keeping Intel alive.... is millions of customers who pay more for Intel's chips than it costs Intel to make them.

AMD has part of the equation right... by nearly giving its chips away for free it can get plenty of customers (like 20% of a very big market). AMD's only problem is the latter part: actually making more from the sale than the chip cost to make.

I'll give you credit Sharikou, you finally admitted that something somewhere at AMD is not 100% super-perfect! Now, I'm expecting you to spin things out into the future even further to that magical halcyon day when AMD finally gives us the chips you said would be here 4 months ago.

You know Sharikou, what you really remind me of are those nutjobs who run cults predicting the end of the world and stuff like that. It's always just around the corner, no matter what 'facts' might stand in the way of the prediction. Anybody else (sane) on the outside who uses facts and real information to show them how wrong they are is just demonized and insulted as some kind of infidel. Oh, and then when the prognosticated date arrives and absolutely nothing happens... just deny you ever made the prediction and claim you were right all along!

I thought K10 was supposed to be out in Q2 at 3GHz?
I thought you said you could buy K10 servers in July, so where the hell are they?
I thought you said a 2GHz K10 could (in your retarded vernacular) 'pre-frag' a Penryn at 3.5GHz?

Here's a big ol' dose of vitamin Truth: AMD turned into what Intel was when Intel was in its boneheaded-stupid period, which ran roughly from 2003-2005. Now I know the Core2 did not actually come out until 2006, but by 2005 Intel internally had finally gotten the message that Prescott & Netburst were stupid, and that IA64 was not coming to the desktop anytime within the next 20 years. A bunch of people at Intel who were pretty similar to some people on this board (willfully ignorant and blind to any facts) were either pushed aside or fired and Intel had the painful process of shifting gears in a hurry. However, from this pain came quite a bit of gain, as the Core2 shows (and as the fact that Intel is making a couple of Billion in profit per quarter shows).

Meanwhile, AMD had perpetually been the underdog for decades, and AMD was actually able to survive in this role. There always is a market for very cheap processors where performance is not that critical and AMD was structured to fill that niche better than Intel. It was not always a very profitable niche, but AMD eked out a living. Then came the K7 where AMD chips were first able to actually beat higher-end Intel CPUs (on and off), and finally the K8 which was an excellent microarchitecture (after all AMD bought all the design talent for the K7/K8 from Digital who had made the Alpha). The K8 coupled with Intel's stupidity meant very good times for AMD from late 2003 until 2006.

And then things went to hell because AMD turned into what Intel had been. Did AMD focus like a laser on K10? No they didn't, they went off trying to design some overly exotic super-threaded chip that they could not even get to perform well in simulations (sounds a bunch like Itanium, except Intel never bet the farm on Itanium). You'll never see this chip, it was cancelled and K10 is the rushed replacement (AMD loves to brag about how its designs were 'native' for quad cores from the beginning... so why the hell is it taking so long to get K10 out the door, shouldn't it have been a 1 year project tops?)

Next, for probably the first time in its history, AMD had an actual cash surplus of several billion dollars. They could go right out and build a new fab at 45nm and be competitive with Intel on process technology for perhaps the first time! Oh wait, instead AMD decided to purchase ATI.... once again, instead of focusing on the next step AMD got confused by actually having some cash to spend and had to 'diversify' into an area where there was no business case to expand. The trainwreck with the r600 is more than enough proof to show that. Oh, I know, AMD fanboys will start talking about 'fusion' as if I could go out and buy that stuff today. Well, while there are some interesting aspects of 'fusion', AMD could just as easily have signed a development contract with ATI for a couple of hundred million that would have had better results because ATI could focus on what it does best and not be killed in a takeover.

The bottom line: AMD got a little success, it did not know what to do with the success, and now it's in a world of hurt. The biggest problem is that AMD cannot go back to the old days. It has way too much debt and too many outside investors now that did not exist before. In the old days, AMD could lose money, lay people off, and survive until things got better. It has much less flexibility now, unless you are talking about a buy-out or firesale of the fabs & IP. Right now, K10 had better be all that and the kitchen sink or AMD as you know it will not be around for the long haul.


Oh, and Oneexpert, you could really stand to listen to the above advice about being realistic, especially when you pull numbers out of your ass on performance/power/anything else... have you ever considered working for Kim Jong Il? You'd be great at telling starving North Koreans how great the Dear Leader is and how horrible it is to live in South Korea where they 'force' you to eat every day!

10:10 AM, July 15, 2007  
Blogger Amdzoner said...

http://www.theregister.co.uk/2007/07/12/fujitsu_siemens_primergy_tx120/

"Fujitsu Siemens says it has launched the world's smallest, quietest and most cost efficient tower server."

"The system can be outfitted with a 1.866GHz Intel Xeon dual-core processor .."

10:17 AM, July 15, 2007  
Blogger Evil_Merlin said...

Ph(ake)d.

The only thing keeping Intel alive is the fact they are making money hand over fist, something that AMD isn't doing (in fact they are doing just the opposite).

You really are a fucking moron aren't ya? You really are a deluded fanboi that lives in some type of dreamworld.

I'm still waiting on your claim that AMD was going to have the K10 out in Q2 (wait, we are well into Q3 now, are we not?)

By the time AMD optimizes its 65nm products, Intel will have already moved well on to the 45nm process and started transitioning to 32nm.

In a nutshell, AMD is fucked right now.

12:19 PM, July 15, 2007  
Blogger slack_comp_user said...

Been working most of the day, this has cheered me up no end. Thanks for the laugh Shari :)

12:47 PM, July 15, 2007  
Blogger The Burninator said...

DO NOT BELIEVE PRO INTEL LIER!

There is none of problem in the AMD productions! All K10s are clocked at 4Ghz while using 1 watts of powers!

Burninator has seen many lies against the AMD:
1. People saying Penryn must be at 3.5Ghz to be beating K10 at the 2Ghz. THIS IS A PRO-INTEL LIE. Penryn is evil vapor-chip that is not real. It is well known fact that Intel CPUs only work when in same room as AMD CPUs since they steal. AMD chips power all machines, so Penryn can never beat any AMD CPU in any test ever.

2. People say Barcelona will come out on time as always scheduled by the Hector in August. DO NOT BELIEVE THIS INTEL LIE. K10 is already completely available. Were not super-success machines shown destroying evil Intel in the April at the Computex proceedings? This 100% super-proof that AMD is already in 100% success. Any sites that deny the success are only Intel PUmpers.

3. Liars who say that Barcelona is at the 2GHZ since that is all needed for beating Intels. WRONG intel fanbois! A Barcelona at only the 1Ghz is still 10x generations better than any Intel CPUs, and has 4x quad-true-power! So a Barcelona at the 1Ghz is better than Intel fake-chip at 40Ghz!! Anyone who says otherwise is the fanboi!
Burninator already give real truth reason: The primitive softwares that Intel pumpers forced on CPUs with evil monopoly cannot handle true power of Barcelona 10x lead. So Hector is only putting chips at 2Ghz until the softwares are made super-perfect by AMD technicians. Once softwares are fixed, all K10s will clock at 8Ghz with 100% passive no-fan needed!

4. Some Intel pumpers are on this site, but they pretend to like AMD! TRAITORS TO CAUSE YOU WILL BE PUNISHED FOR LIES!!!
ONEXPERT!! You shall pay for anti-AMD LIES!! You say AMD makes CPUs that are using 45 watts of the electricity? TOTAL LIES!!! Everyone who is non-pumper knows that no AMD CPU has ever used more than 1.5 watts of the electricities and that is only because of evil Intel Monopoly stealing cycles!

You say Core 2 is the P3s with glue? You pro-Intel traitor!! Every smart-non-evil persons is knowing that the Intel P3 is only a LIE for stealing from the AMD chips! Core 2 is not real, you are just pumping Intel.

You say that Intel laptops use 800watts of the electricity? You sicko-fanboy! Burninator has 100% of the proofs that ALL Intel chips must use 10000 Watts of the powers to turn on! Anyone who says any Intel chip uses less is a pro-Intel fanboy TRAITOR!

Last point, for this you must die: How dare you say that the IBM pumper chip is the fastest? Burninator already prove that the K10 is the 8Ghz fastest! You are paid IBM pumper too liar fanboy! And then you talk of C7 chips? You are traitor to AMD!

1:30 PM, July 15, 2007  
Blogger Evil_Merlin said...

thanks for the laugh burninator!

You are a gem in this place!

1:39 PM, July 15, 2007  
Blogger Scientia from AMDZone said...

chuckula

"Here's a big ol' dose of vitamin Truth"

Curiously, four or five half truths do not add up to even one whole truth. How about a little accuracy with those "truths"?

"Intel was in its boneheaded-stupid period, which ran roughly from 2003-2005. Now I know the Core2 did not actually come out until 2006, but by 2005 Intel internally had finally gotten the message that Prescott & Netburst were stupid"

Wrong. C2D would have taped out in 2005. Actual development had to have started in 2003 or even 2002. It seems most likely that development began before Pentium M's release, because the secrecy around Pentium M was unprecedented. This made no sense for a niche mobile processor but does make sense if it included the C2D design.

Intel undoubtedly knew about Prescott's problems in 2002 but finally gave up on further development in 2004 (Tejas and Nehalem). It would be my guess that this design work was reassigned with part becoming Tulsa and whatever was useful going into Nehalem II.

"Meanwhile, AMD had perpetually been the underdog for decades"

Wrong again. Up to 80486, AMD was only a second source for Intel. AMD did no development on x86 processors and merely manufactured designs that were 100% Intel's.

"Then came the K7 where AMD chips were first able to actually beat higher-end Intel CPUs"

Wrong. K6 was able to outrun PII in spite of the fact that PII ran on Slot 1 while K6 was handicapped by Socket 7. What saved Intel was not higher speed but SSE.

"finally the K8 which was an excellent michroarchitecture (after all AMD bought all the design talent for the K7/K8 from Digital who had made the Alpha)."

Not really. The K7 used the Alpha bus from DEC but nothing else in the design. You seem to be forgetting that K7 was AMD's 3rd non-Intel architecture: K5, K6, and K7. K7 would have included lessons learned from both K5 and K6. AMD needed the Alpha bus because Intel had moved to a proprietary standard and Socket 7 was too limiting. We should mention though that it was from the Alpha bus that AMD got Lightning Data Transport, used in K7, which became HyperTransport in K8.

" The K8 coupled with Intel's stupidity meant very good times for AMD from late 2003 until 2006."

Wrong again. AMD hit its lowest financial point in 2003. AMD will match this low point if it loses another $1.5 Billion. AMD's growth was also very slow in 2004 as AMD was still making mostly K7's in Q1 04. AMD's growth did not pick up until 2005.

"they went off trying to design some overly exotic super-threaded chip that they could not even get to perform well in simulations (sounds a bunch like Itanium"

No. That sounds a lot like Intel's canceled Whitefield project. Itanium came after two failed projects at Intel plus a low performing VLIW architecture. I think what you don't understand is that AMD had to dismantle its 29000 team to create the K5 design team; this was AMD's only design team. This design team was then added to with NexGen engineers although minus the lead designer.

It seems that AMD did not have enough design staff to have five separate projects going at the same time as Intel did. Pentium M had good success in mobile but only limited success on the desktop culminating with Yonah (Core Duo), P4 did allow Intel to fill in first with Presler (dual core server) and then Tulsa (4-way dual core), Itanium has had many delays, and Whitefield was canceled which also killed the planned release of CSI in 2007. Only the C2D project turned out well enough to pull ahead of K8.

"K10 is the rushed replacement"

K10's architectural changes compared to K8 are very, very similar to C2D's compared to Pentium M. Does this mean that C2D was rushed as well?

"so why the hell is it taking so long to get K10 out the door, shouldn't it have been a 1 year project tops?"

Are you joking? The normal development cycle for a new architecture is 4 years. Major K10 development would have started in 2002 and ended when Barcelona taped out in 2006. Barcelona would have been in minor development since then for any necessary fixes. The current timeframe for major development should be 2009 with 2008 releases nearing tapeout.

"They could go right out and build a new fab at 45nm and be competitive with Intel on process technology"

What? It takes years to build a new FAB. If AMD had begun building a new FAB in late 2006 it would have been ready no sooner than late 2009 (or 2010 if on a new location like NY). Secondly, FAB 36 is a 45nm FAB. A third FAB would have the same process technology as FAB 36. I suppose extra money could have helped convert FAB 30 to 300mm sooner but right now AMD has more capacity than it can sell. And, if by some miracle AMD's demand skyrocketed and they needed more capacity, they could always get more from Chartered (which is producing at the contract minimum right now). I can't honestly think of a single problem that would have been fixed by buying a third FAB.

"AMD decided to purchase ATI.... once again, instead of focusing on the next step AMD got confused by actually having some cash to spend and had to 'diversify' into an area where there was no business case to expand."

This isn't quite true. ATI is now developing an all new mobile chipset to go with AMD's new mobile processor for 2008. AMD needs this mobile chipset to compete with Centrino but it wouldn't have happened without the purchase. Also, the 690G chipset is working well and is actually ahead of both Intel's and Nvidia's chipsets.

" The trainwreck with the r600"

Indeed. Unfinished drivers and leaking 80hs transistors on TSMC's 80nm process. This will probably look a bit better in Q1 08 when ATI is able to move to a better process at TSMC (65, 55 or 45nm) and drop those not so great 80hs transistors.

"Well, while there are some interesting aspects of 'fusion', AMD could just as easily have signed a development contract with ATI for a couple of hundred million that would have had better results because ATI could focus on what it does best and not be killed in a takeover."

You've got this backwards. ATI was short of development money on its own (which is why R600 looks the way it does). AMD has boosted ATI's resources in spite of its own financial problems. Fusion is long term, probably 2009 for full benefits. ATI's benefit is much sooner.

"AMD got a little success, it did not know what to do with the success, and now it's in a world of hurt."

No. AMD didn't have enough design staff to get X2 out the door and then add RAS and Pacifica to Rev F along with a new DDR2 IMC and still have enough people left over to work on both K10 and core upgrades for Rev F. Intel on the other hand, had enough for five simultaneous projects.

"The biggest problem is that AMD cannot go back to the old days."

Really? Their DTX 2007 strategy looks almost identical to the K7 strategy back in 2002. It is also highly reminiscent of Intel's release of ATX. Intel dropped its successful ATX strategy when it released BTX. This might leave enough of a gap for AMD to exploit.

"It has way too much debt"

Less than it had in 2003.

"In the old days, AMD could lose money, lay people off, and survive until things got better."

When did AMD lay people off?

"unless you are talking about a buy-out or firesale of the fabs & IP."

I can't think of any company that could buy AMD. Motorola has been out of the front line processor game for years (since it stopped developing PowerPC for Apple) and IBM can't sell processors when it also sells systems.

An actual breakup of AMD is theoretically possible but unlikely since this would upset a lot of companies, not only Cray, Sun, and IBM but Apple and Microsoft as well.

"Right now, K10 had better be all that and the kitchen sink or AMD as you know it will not be around for the long haul."

AMD does need to avoid serious financial problems through the rest of 2007 (like losing another $2 Billion). AMD should get some help on at least the low end desktop with DTX and the biggest help from being able to offer quad core processors to compete with Clovertown and dual core server chips with faster SSE. AMD will get another boost in mobile but not until 2008.

It is a good question what speeds Penryn will be offered at first. If Intel truly can get 3.2 or 3.33GHz chips out the door in Q4 07 or Q1 08 this should keep them in the lead on the desktop top end, and also the server top end if the power draw stays down. If Intel is not able to offer higher clock speeds right away they also have the secondary strategy of SSE4, much as they did back with PIII and P4.

3:09 PM, July 15, 2007  
Anonymous Anonymous said...

The highest clocked K8 on 65nm is 2.7GHz, not 2.6 as you claim. You should update your blog post.

3:34 PM, July 15, 2007  
Blogger Evil_Merlin said...

Once again, sharidouche shoots his moronic piehole off and digs his hole even deeper.


So Ph(ake)d, when Intel doesn't go BK in Q2 2008, what are you going to do, keep shifting your dates around?


If all Intel does is copy, why isn't AMD at 65nm yet? Why isn't AMD transitioning to 45nm like Intel already is? Why does AMD feel the need to copy stuff like MMX? SSE (in all its various flavours)... why does AMD even exist? Oh yeah! They copied Intel...

Again, for all the fucking stupid bitching you and your fanboi crowd do about the FSB of the Intel CPUs, they sure have no issue handing AMD its ass and saving power under load... go figure.

Of course you will never comment on it because it's only going to make you look like more of an ass than you already do to all but your little fanboi crowd.

3:35 PM, July 15, 2007  
Blogger Chuckula said...

OK Scientia, calm down. A few points that I've grabbed at random:

1. I'll grant that the K6 was definitely competitive with the higher-end P2's in integer operations... however it was one hell of an unbalanced chip, in that its floating point performance was horrendous. AMD did not have a real floating-point setup until the K7.

2. The K7 and the Alpha 21164 have a hell of a lot in common. The DEC bus is just the external manifestation. Probably the biggest differences were that the K7 had an x86 decoding unit (obviously) to feed the internal engine, and that the K7 had SSE instructions which earlier Alpha designs did not have (I think one of the final Alphas may have had some form of vector instructions).
A few key features that K7 had from Alpha's heritage: 1. A true superscalar FP engine (why the K7 didn't suck for FP); 2. Multi-level cache that was on-die (in later models, also grafted onto the K6-III); 3. It was the first AMD chip to truly convert the decoded x86 into an internal simplified (RISC-like) macro-op structure. Since then all AMD CPUs (and Intel CPUs) have used this method for abstracting x86 away from the guts of the chip's design.

3. Financials are the results of previous decisions; they are what happens after decisions are made, not before. So when I say things turned around for AMD I am talking about the late 2003 shipment of the first opterons. It took time, but things turned around.

4. AMD most certainly did have a canceled design, it was called the K9, it had a whole crapload of parallel threads, and it would have eaten power while having horrible single-thread performance. Check it out right here.

5. Yes it takes time to build a fab. I already knew that. So did AMD in 2005 when it had 1. money; 2. a bunch of important decisions to make. Had the decision been made then, the 45nm fab could be coming online within a couple of months of Intel pushing out Penryns. Oh, and that Penryn date would probably be sooner, since Intel would actually be scared.

6. It takes time to design chips! Yes, I already knew that too. One important thing you keep neglecting to mention is that AMD effectively said it had already designed the multicore chips in 2003! You work at AMDZone, don't you freakin' remember when the Opteron came out and everybody was talking about the memory crossbar and how making a dual core was just a manufacturing challenge? AMD claimed to have already done the hard work. Now I'm not expecting a quad core part in 4 weeks or anything, but seriously, if the AMD architecture is so freakin' amazing and already designed (according to AMD itself), why is it taking so long?

7. It takes a long time to design chips! (the Intel version) Yes, I already know this. Once upon a time at Intel there was a chip code-named Banias that was designed for laptops (think Centrino). It took (guess what?) about 4 years to design & produce! Meanwhile on the desktop, Intel was listening to marketing droids go on about the P4 and Netburst.
So what Intel had was (in parallel): 1. a really good laptop chip; 2. a really shitty desktop chip.
As time went by the K8 chips started cleaning up and Intel was in trouble. There already was a next-generation to the earlier Centrinos in the works, but it was only for laptops. (It's called the Core, I know I have one in my laptop). It was the first true dual-core from Intel, and it had been in design for some time, but it was just an evolution of Banias/Dothan in many ways so it was not a radically different chip.
In early 2005 the shit was really hitting the fan, so Intel took the existing Core design, added on some features that would make it work better in larger-scale systems (more cache, 64 bit instructions, some tweaks to the TLB & opcode-fusion). And rushed out Core 2 as fast as it absolutely fucking could.
YES, VIRGINIA, THE CORE 2 WAS RUSHED; EVERYBODY SAW IT. Now, Intel got really freakin' lucky, since the Core 2 was a spectacular performer and it also avoided the power issues of the Prescotts. Additionally, the original Core CPUs had already been migrated to the 65nm process, so Intel was only taking a somewhat tweaked existing design and manufacturing it on a process it had already gotten experience with.

The rest is history. Intel could have fucked up on the Core 2; there were no guarantees it would be great, but it turned out well, and it's the results that count, not the tick-marks on a manager's milestone PowerPoint chart. Intel got massively lucky that a much-derided design team in Israel (the P4 guys used to insult the hell out of Centrino even while producing absolute shit themselves) was there to save the day.
AMD does not have the same ace in the hole.

4:05 PM, July 15, 2007  
Blogger Scientia from AMDZone said...

Well, since more was added to the original article I'll comment on that as well.

Sharikou

"The x86 market is completely different today with AMD being the industry standard setter in server, desktop and mobile."

Not quite. Intel became a serious standards setter when it released the ATX standard. Intel has had PCI (which beat VLB), PCI-X, and PCI-e, along with USB (which beat FireWire). It was clear that AMD had no standards clout when it released 3DNow! only to have it smothered by SSE.

Intel was the sole standard setter on x86 right up to RDRAM when the market went with AMD and DDR. Then Microsoft backed AMD with AMD64. AMD has been able to maintain socket standards after the very slow start of Slot A and the slow start of 940 and 754. For some time, AMD's chipset was the only one available for K8 or K7. AMD also had the only dual FSB server chipset for K7 MP.

Now, it appears that AMD is finally getting enough support with compilers from PGI and it looks like its Pacifica version will get support (but it is still too early to tell). To be honest, the real proof will be if K10 sees a big surge in server sales in late 2007 and early 2008. It is obvious however that AM2 and F had plenty of support at launch, unlike both Slot A and 940, so this is quite an improvement. However, Intel did not copy all of AMD's SSE4a instructions (much as AMD did not copy all of SSE3). AMD however is likely to copy all of SSE4. We will have to see if Intel's Geneseo does better or worse than Torrenza and whether CSI is preferred over HT.

" Whatever Intel does, it is just copying AMD's ideas."

You could argue that AMD copied a lot of Intel ideas as well. Certainly, Intel doubled the cache bus and beefed up the SSE execution units on C2D ahead of K10. Intel also used L3 back with Northwood Xeon.

"The only thing that is holding AMD back is its apparent difficulty with 65nm SOI process."

Which has been under development for 18 months. If AMD still doesn't have it figured out then that is a problem.

"In terms of architecture, AMD is years ahead of Intel."

Not years. We'll have to see if Intel's quad FSB chipset is competitive; its dual FSB chipsets seem to be. Intel should have its own CSI and IMC by early 2009 for server chips. However, it has been suggested that Intel will retain the FSB for single socket chips. AMD's cache design seems a bit more balanced than Intel's but this may be offset somewhat by Intel's faster L1.

"Although Intel has copied AMD64 instruction set, NX bit and emulated multi-core,"

Intel's C2D chips are real multi-core, not emulated as they were with HyperThreading.
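
You can even see the difference from software. A minimal C sketch with GCC's cpuid.h (x86 only; the leaf numbers are from the vendors' CPUID documentation, and the printout is suggestive rather than a complete topology enumeration):

#include <stdio.h>
#include <cpuid.h>

int main(void) {
    unsigned a, b, c, d;

    /* Leaf 1: EDX bit 28 is the HTT flag; EBX bits 23:16 give the
       number of logical processors per package.  On a HyperThreaded
       P4 this counts SMT threads, not real cores. */
    if (!__get_cpuid(1, &a, &b, &c, &d)) return 1;
    printf("HTT flag: %u, logical CPUs per package: %u\n",
           (d >> 28) & 1, (b >> 16) & 0xff);

    /* AMD extended leaf 0x80000008: ECX bits 7:0 hold (cores - 1).
       A real dual core like the X2 reports two cores here. */
    if (__get_cpuid(0x80000008, &a, &b, &c, &d))
        printf("cores per package (AMD leaf): %u\n", (c & 0xff) + 1);
    return 0;
}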

" Intel's architecture is stuck with the primitive FSB design."

Which may be fine even in 2009 for single socket.

"Once AMD rolls out Direct Architecture 2.0 on fast clocks, Intel will look retarded."

DC 2.0 is good up to 32-way. However, if the great majority of the market stays at 4-way and below and Intel's quad FSB chipset works well then AMD won't have much of an advantage on non-HPC systems.

" There are rumors that Intel will copy more from AMD, such as distributed memory architecture, direct connect and embedded memory controllers."

It looks like Intel will use an IMC on 2-way and up and communicate with CSI. However, it also looks like CSI will use an external hub chip for packet routing. CSI is a lot less sophisticated than HT but it might very well be faster. CSI is not really copying AMD as it is more of an upgrade of PCI-e.

"But those are still on the paper."

Intel has been working on CSI for quite some time, at least since 2005. I'm sure they have more than paper. I would say they have gone through simulations and must have some prototype hardware.

"AMD got everything in place to finish Intel."

Except money, resources, design staff, and FAB capacity. We'll have to see if AMD can keep up with tick tock with its own modular core. We'll also have to see if AMD can keep up with two separate processor families now that it has a separate mobile line. AMD should get some benefit (faster process ramping and more flexible production) by having two similar 300mm FABs once FAB 38 is running. This should make them a little more efficient than Intel is with its distributed FABs.

However, even with a third FAB AMD would not have enough capacity to drive Intel out of business so it certainly won't happen with two.

" The only thing it needs to do now is to get 65nm SOI to work faster."

And get production of K10 ramped as quickly as possible. AMD had originally projected half of all server chips within two quarters. This is probably doable but how quickly can the desktop chips ramp? Also, will AMD fall behind on the 45nm process schedule? It's not going to help if AMD is ramping 45nm 10 months behind Intel.

4:09 PM, July 15, 2007  
Blogger Chuckula said...

This comment has been removed by the author.

4:10 PM, July 15, 2007  
Blogger Chuckula said...

Oh, and for those of you who think that AMD's recent layoffs were the first time this has ever happened:

AMD LAYOFFS APPEAR LIKELY

Source: Mercury News Staff and Wire Reports
Advanced Micro Devices Inc. President W.J. Sanders III all but confirmed that AMD will lay off workers next month. Sanders had previously announced that AMD would abandon its no-layoff policy. Speaking Wednesday at AMD's annual meeting, he said: "We have not defined yet that we will have a layoff. (But) I do believe we are going to have to reduce our work force."


Published on September 11, 1986, Page 1F, San Jose Mercury News (CA)

That's a blast from the past!

4:41 PM, July 15, 2007  
Blogger Unknown said...

When did AMD lay people off?

430 people a couple of months ago.

http://news.com.com/8301-10784_3-9718202-7.html?tag=head

4:54 PM, July 15, 2007  
Blogger Scientia from AMDZone said...

chuckula

"A true superscalar FP engince"

Okay, you are running together unrelated things. You seem to be suggesting that AMD didn't know how to build superscalar FP until they talked to DEC. This is nonsense. You simply can't do superscalar FP with the legacy x87 instructions.

Since SSE didn't exist when K6 was designed, it naturally would not have superscalar FP. Adding this was a response to SSE and had nothing to do with Alpha.
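
The difference is easy to illustrate from the software side. A minimal sketch with SSE intrinsics (x86; compile with something like gcc -msse -O2; the point is the flat xmm register file, not the arithmetic):

#include <stdio.h>
#include <xmmintrin.h>  /* SSE intrinsics */

int main(void) {
    float x[4] = {1, 2, 3, 4}, y[4] = {5, 6, 7, 8}, out[4];

    /* These two operations have no dependency on each other, so a
       superscalar FP unit can issue them in parallel.  The legacy
       x87 stack forces every result through st(0), serializing it. */
    __m128 a = _mm_loadu_ps(x);
    __m128 b = _mm_loadu_ps(y);
    __m128 sum  = _mm_add_ps(a, b);   /* independent op 1 */
    __m128 prod = _mm_mul_ps(a, b);   /* independent op 2 */

    _mm_storeu_ps(out, _mm_add_ps(sum, prod));
    printf("%.0f %.0f %.0f %.0f\n", out[0], out[1], out[2], out[3]);
    return 0;
}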

"Multi-level cache"

This again has nothing to do with Alpha. Multi-level cache goes back to mainframe architecture. Secondly, all companies added more cache as the manufacturing process allowed more transistors. Look up the progression of Motorola's 68000 series for example.

"It was the first AMD chip to truly convert the decoded x86 into an internal simplified (RISC-like) macro-op structure."

This is wrong by two generations. The K5 was in fact the first AMD processor to convert x86 instructions into RISC-like ops. K5 was built around AMD's 29050 RISC core. Secondly, even though K6 was designed by NexGen and not AMD, it too used RISC-like internal ops.

"So when I say things turned around for AMD I am talking about the late 2003 shipment of the first opterons."

The first Opterons shipped in Q2 2003, not late 2003. AMD was also already producing 754 chips but only came out with socket 939 chips in "late 2003". Again, this did not turn things around. The 130nm SOI process had terrible yields while the K8 die was twice the size of the K7 die. Consequently, K7 stayed in production into 2004. It took a year after the launch of K8 before AMD was gaining ground. AMD's stance in 2004 improved as well because of the FAB 30 expansion and getting the 90nm SOI process up and running.

If you are only looking at architectural wins rather than revenue you could say that AMD got some Opteron based HPC systems out in 2003. However, AMD did have previous HPC systems with K7 MP as well.

"AMD most certainly did have a canceled design"

I never said they didn't. I said that it would be more properly compared with Whitefield which was likewise overly ambitious and never released rather than comparing with Itanium which was released and has had some success.

" Yes it takes time to build a fab. I already knew that. So did AMD in 2005 . . . Had the decision been made then, the 45nm fab could be coming online within a couple of months of Intel pushing out Penryns."

You still seem confused. AMD started building FAB 36 before 2005 and it is a 45nm capable FAB. You have yet to explain how anything that AMD could have done in 2005 would have been faster than what AMD is doing right now with FAB 36.

"AMD effectively said it had already designed the multicore chips in 2003"

You don't know your history. K8's dual core design goes back to 2001 when AMD first showed working 800MHz prototypes. However, in 2003 AMD engineers would have to have been in major development for Revision F including Pacifica, RAS, and a new DDR2 memory controller. They would have had some personnel doing minor development work on X2.

" everybody was talking about the memory crossbar"

The crossbar in K8 connects: Memory Controller, HyperTransport Controller, and L2 cache. It doesn't connect to memory. The original design included a port that was used to connect to a second core.

"In early 2005 the shit was really hitting the fan, so Intel took the existing Core design, added on some features that would make it work better in larger-scale systems (more cache, 64 bit instructions, some tweaks to the TLB & opcode-fusion). And rushed out Core 2 "

This is completely false. Core 2 Duo was not developed in 2005 from Yonah. Core 2 Duo had to begin development no later than 2003 but more likely 2002, before either Dothan or Yonah were released.

5:02 PM, July 15, 2007  
Blogger Frank said...

Whack
whack
whack


DC 2.0 is good up to 32-way.


So is X4. Nehalem will extend that to 512.

5:05 PM, July 15, 2007  
Blogger Scientia from AMDZone said...

Oh, you are referring to layoffs in 1986. Okay. I was talking about the 2002-2003 period which was AMD's worst in terms of revenue. In Q2 and Q3 2002, AMD lost 70% of its revenue. I don't think that happened at any other point in AMD history.

5:06 PM, July 15, 2007  
Blogger Scientia from AMDZone said...

Yes, the layoffs in 2002 make sense as that was AMD's lowest period. That suggests that AMD's volume dropped sharply as well. I wouldn't count the layoffs resulting from closing FAB 14 and 15 in 2001 though.

Frank

You seem to have gotten my point about DC 2.0 backwards. I was comparing it to the quad chipset rather than Nehalem. However, if you want to compare it with Nehalem or with X4 then DC 2.0 allows glueless 32-way while both X4 and Nehalem require expensive external chips. If I were counting connections with additional chips then Opteron can already go above 8-way with chips from two different manufacturers.

5:18 PM, July 15, 2007  
Blogger Chuckula said...

Alright Scientia, I'll grant you whatever you want to say about how the K7 is not related to the Alpha, fair enough.

However, I still cannot for the life of me see where you get the idea that Core 2 is a completely different design from the original Core chip. That makes no sense not only from a technical perspective, but also from 1. history, and 2. the very short time the original Core was the primary laptop chip Intel sold.

Here is the absolute earliest date Intel could have started differentiating Core 2 from Core: May 7, 2004. What is so important about that date? Well, that's the date that Tejas (codename for the next generation Intel Netburst iteration) was taken out back and shot. I'm not an Intel fanboy... I sure as hell don't think they already had their act together on that exact date to get Core 2 produced, I think they wallowed around a bit before hitting on the idea to take the (then in development) Core and put it on steroids.
I'm not disputing that the original Core was probably in the design pipe from sometime in 2002, but that doesn't mean Core 2 was a separate design from an early stage at all; in fact, the similarities between the chips are so great I'd be shocked to find out Core 2 was some massive separate effort.

So, from a historical background, Intel was in a PANIC when all it had was Netburst and AMD was cleaning up. It had a whole bunch of designs that it basically trashed (there was Tejas, Nehalem and both of their Xeon counterparts (the new Nehalem is a totally different chip with the same codename)).
There is no way I'm going to believe that historically, Intel had a completely different desktop/server design in the pipeline from 2002 for the Core 2.... ain't happenin'.

The next thing is to look at what Intel actually sold. From wikipedia:
Core Duo was released on 5 January 2006
On July 27, 2006, Intel's Core 2 processors were released.

THAT IS LESS THAN 8 FREAKIN' MONTHS FOR 2 ARCHITECTURES YOU CLAIM ARE MASSIVELY DIFFERENT. There is no way in hell Intel took a completely different architecture and shoved it out in less than 8 months. What it was doing instead was putting out the Core as a short-term stopgap (it's a fine laptop CPU, no need for 64 bits, not too power hungry), and also using the experience with 65nm fabbing to help get yields of the Conroes and Meroms up to speed.

The last point is: Just look at the designs! The Core 2 is not really radically different from the original Core. Sure there are different features, but nothing that could not have been introduced on the order of 18 months (late 2004/early 2005) onto a design (the original Core) that was basically done. Most of the stuff added on is pretty safe. In fact, look at the 2 most complex instruction features: 1. 64-bit instructions; 2. opcode fusion. Well, 64-bit instructions actually aren't that difficult to add, especially when Intel already had experience with them. Opcode fusion is cute, don't get me wrong, but it's really just an advance on the instruction decoder and internal macro-op language (and its real-world benefit is usually around 5%, not gigantic). And finally.... opcode fusion is turned off in 64-bit mode, so no having to worry about 2 new features screwing with each other! Tacking on cache and the other tweaks Intel added were much more manufacturing issues than anything related to design; Intel can design cache in its sleep.
I remember when the new chips came out a bunch of sites questioned why it should even be called the Core '2' at all.
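
To make the opcode fusion point concrete, here is the kind of compare-and-branch pair it targets; a sketch in C (the exact instructions depend on the compiler, so the assembly in the comment is illustrative):

#include <stdio.h>

/* The hot loop below typically compiles to a cmp/jcc pair per
   element, something like:
       cmp  eax, edx
       jge  .skip
   On Core 2 (in 32-bit mode) the decoder fuses that pair into a
   single macro-op, saving decode and issue bandwidth -- real but
   modest, which is why the measured gain is only a few percent. */
int count_below(const int *v, int n, int limit) {
    int hits = 0;
    for (int i = 0; i < n; i++)
        if (v[i] < limit)  /* the fusable compare-and-branch */
            hits++;
    return hits;
}

int main(void) {
    int v[] = { 3, 9, 1, 7, 5 };
    printf("%d\n", count_below(v, 5, 6));  /* prints 3 */
    return 0;
}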

My point is.... Intel rushed the Core2 out as fast as it possibly could. This was not a cool calculated long-term plan, it was a response to AMD. Now at this point, Intel has gotten breathing room and can actually make proper long-term plans. From all appearances Penryn is going to be great. Nehalem is more of a wildcard... it could be great, it could end up being a disaster, at this point we just don't know enough to tell.

One very final point: You seem to love to point out how Intel has lots of design teams while poor AMD is stuck with only one. Why didn't AMD invest in more design teams rather than trying to run out and buy ATI? Intel's parallel design scheme is working great now (they did not have true parallel design going on during the Prescott era, just the dumb luck to have a brilliant laptop division). There's no reason AMD couldn't have saved $5 billion and put in for more design staff instead.

5:41 PM, July 15, 2007  
Blogger Sharikou, Ph. D. said...

My point is.... Intel rushed the Core2 out as fast as it possibly could.

All Core 2 has is about a 10% IPC lead over K8, which could be easily overcome had AMD been able to crank clock speeds on 65nm SOI. The 3GHz X2 6000+ is a good part that can frag the Con E6600 in most tests. The good news for Intel is that AMD is having a hard time tweaking 65nm SOI. Today, the Opterons are still made on 90nm...

Once AMD nails the 65nm process, Patty will be begging in the streets unless he unloads his options right now and saves some.

6:31 PM, July 15, 2007  
Blogger GutterRat said...

Sharikou,

Did the meds wear off yet?

At least Scientia was man enough to admit that he might have erred in his posts.

Are you man enough to admit that the only BK you will see is AMD's BK, and not Intel's?

6:45 PM, July 15, 2007  
Blogger george said...

You should post about Theo de Raadt's posting about Corpse 2 Duo bugs. Nice performance, but it is buggy as hell.

7:10 PM, July 15, 2007  
Blogger Chuckula said...

OK Sharikou.... I stand by my statement, but you should have also included the fact that I think the C2D is quite a bit better than any Athlon out there. Like I said, Intel may have gotten it out quickly... but it's an awesome chip.
If AMD had some of the urgency now that Intel had back then things might not be as bad for AMD right now.

7:47 PM, July 15, 2007  
Blogger Evil_Merlin said...

wtf? He STILL seems to think Intel is going to keep using 65nm...

Um, you do understand that Intel is already starting the transition to 45nm. The die shrink alone has shown a 10-15% increase in CPU performance. Which is another 10-15% of CPU performance to club AMD back into the Stone Age, because Intel with its ancient bus and glued-together P3s is already kicking AMD's ass silly.

IF that was the ONLY thing AMD had to worry about... Alas, Intel doesn't think sitting still is such a good thing and has several new CPUs in the pipe, all of which seem to be releasing about the same time AMD gets off its ass and starts selling Barcelonas in quantity...

Too bad for AMD...

9:42 PM, July 15, 2007  
Blogger Unknown said...

AMD admits that they may very well run out of cash and cease to be competitive against Intel and Nvidia:

We cannot be certain that our substantial investments in research and development will lead to timely improvements in product designs or technology used to manufacture our products or that we will have sufficient resources to invest in the level of research and development that is required to remain competitive.


AMD also successfully predicted what would happen to their credibility if they delivered parts that were late or underperforming (R600, K10):

If we are delayed in developing or qualifying new products or technologies, such as what occurred with the multiple delays in the launch of our R600 GPU for the high-end category of the PC market, we may lose credibility and our competitors may be able to take advantage of these delays by launching higher performing products before we do, which could cause us to lose market share and force us to discount the selling price of our products.


All this from AMD's Form 10-K.

AMD BK Q2'08.

9:53 PM, July 15, 2007  
Blogger Unknown said...

AMD, in more detail, admits they are heavily in debt and are running out of cash and may not be able to borrow more cash:

As of December 31, 2006 we had consolidated debt of approximately $3.8 billion. In addition, a significant portion of our consolidated debt bears a variable interest rate, which increases our exposure to interest rate fluctuations. Our substantial indebtedness may:
• make it difficult for us to satisfy our financial obligations, including making scheduled principal and interest payments;
• limit our ability to borrow additional funds for working capital, capital expenditures, acquisitions and general corporate and other purposes;
• limit our ability to use our cash flow or obtain additional financing for future working capital, capital expenditures, acquisitions or other general corporate purposes;
• require us to use a substantial portion of our cash flow from operations to make debt service payments;
• limit our flexibility to plan for, or react to, changes in our business and industry;
• place us at a competitive disadvantage compared to our less leveraged competitors; and
• increase our vulnerability to the impact of adverse economic and industry conditions.

We may not be able to generate sufficient cash to service our debt obligations.

Our ability to make payments on and to refinance our debt, or our guarantees of other parties' debts, will depend on our financial and operating performance, which may fluctuate significantly from quarter to quarter,

9:59 PM, July 15, 2007  
Blogger Scientia from AMDZone said...

chuckula

K7 is related to Alpha. Not only did K7 use the Alpha bus but the project manager was from DEC. Saying that K7 got nothing from Alpha would be wrong but trying to paint K7 as some type of Alpha clone is wrong as well. It is entirely possible that K7 includes some smaller features gained from Alpha experience that are not visible in the larger architecture. K7 would include influences from K5, K6, and Alpha. K8 also shows parallel development with Alpha including the IMC and HT. It is clear however that the two chips were not closely related since the Alpha included both DSMT and an onboard router which K8 did not. I suppose it is possible that Alpha's DSMT could have influenced the original K10 design which was canceled.

As far as the C2D heritage goes you seem to misunderstand. When P4 was released in 2001 there was a project to design cheaper Celeron PIII's. However, Celeron began using the P4 core so this work was stopped. The Haifa team then took this work and adapted most of it to a low power mobile design which became Banias. Banias was rushed. This is why they were able to make improvements with Dothan. The final development was the dual cored Yonah.

Now, Intel did realize that Prescott had problems and realized this in late 2002, not 2004. Banias was well along at that point but not yet released. Intel saw the Banias work as promising so they put together a team to work in parallel and this became C2D.

The Indian team also looked at the Banias work and decided that they could put together the Whitefield version for release in 2007. Their version would have had very few architectural changes compared to C2D. Basically, it would have been like a quad version of Yonah stretched to 64 bits with CSI added. However, Whitefield was ultimately canceled because delays pushed the project release timeframe back into 2008 which was too close to Nehalem II.

Getting back to specifics. Core 2 Duo includes many of Banias' design features including micro-ops fusion and a dedicated stack unit. However, suggesting that C2D was derived from Banias suggests that it uses the same core which it does not.

The original Banias design had to be almost completely gutted to widen the registers to 64 bits (which none of the Pentium M line have). This meant that the Integer execution units also had to be widened but there were additional changes which made the Integer pipeline more robust and overcame the port assignment problems that Banias inherited from PIII.

The cache buses were doubled and a fourth decoder added. The FP pipeline was completely overhauled. Consequently, C2D includes little of the original Banias core.

4:59 AM, July 16, 2007  
Blogger Scientia from AMDZone said...

Giant

No need to rake through the information from Q1. AMD's webcast of Q2 will be starting in 2 hours. Recent information, good, bad, or catastrophic will be available shortly.

5:05 AM, July 16, 2007  
Blogger Unknown said...

That's quite a story, Enrique. I like stories. I like stories about pinatas. In fact, I like everything you have to say.

5:14 AM, July 16, 2007  
Blogger Scientia from AMDZone said...

I guess anyone else who wants to get the latest dirt on AMD's financials can get to the webcast at AMD Investor Relations.

It will begin at 10am ET.

5:23 AM, July 16, 2007  
Blogger Unknown said...

Thanks for the link Scientia.

9:18 AM, July 16, 2007  
Blogger Scientia from AMDZone said...

giant

You're welcome, but it turned out to be a waste of time since no questions were asked. That means we'll have to wait until Thursday, 5pm for the Q2 financials. I'm sure plenty of questions will be asked then.

8:08 AM, July 17, 2007  
Blogger Christian Jean said...

With a debate like this, I had to retype my URL just to make sure I was at the right blog :)

It's been a long time since I've seen a 'normal' discussion/debate between the two sides of the camp.

Chuckula said...
Like I said, Intel may have gotten it out quickly... but it's an awesome chip.


I would agree with you Chuckula that the C2D is an awesome chip in the sense of being able to stay ahead of the Athlon, and overall that's what really counts for the consumer.

But in terms of architecture, I would have to disagree with you. I find that competing by adding massive amounts of cache is overkill. If Intel reduced its cache to AMD's size and still beat it, then I would say that its architecture is superior as well.
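
A pointer-chase microbenchmark shows what all that cache actually buys. A rough C sketch (clock()-based timing, so treat the numbers as approximate; the sizes are chosen to straddle typical cache capacities):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Follow a random cyclic chain through a working set of n slots.
   While the set fits in cache, a hop costs a few cycles; once it
   spills past the last-level cache, every hop pays a trip to DRAM. */
static double ns_per_hop(size_t n, long hops) {
    size_t *next = malloc(n * sizeof *next);
    if (!next) return -1;
    for (size_t i = 0; i < n; i++) next[i] = i;
    /* Sattolo's shuffle: one cycle visiting every slot */
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }
    size_t p = 0;
    clock_t t0 = clock();
    for (long k = 0; k < hops; k++) p = next[p];
    double ns = (double)(clock() - t0) / CLOCKS_PER_SEC * 1e9 / hops;
    volatile size_t sink = p; (void)sink;  /* keep the chain live */
    free(next);
    return ns;
}

int main(void) {
    for (size_t kb = 16; kb <= 16384; kb *= 4)
        printf("%6zu KB working set: %5.1f ns/hop\n", kb,
               ns_per_hop(kb * 1024 / sizeof(size_t), 5000000L));
    return 0;
}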

As for GIANT... taking AMD's cautionary statements (required by the SEC) and using them here is just pathetic! Why don't you post Intel's cautionary statements to see if they sound any better?

10:44 PM, July 17, 2007  
