Intel will have to copy AMD again
Intel was forced to adopt the AMD64 instruction set, followed AMD on dual core, followed AMD in opening up its bus, will follow AMD on the embedded memory controller and Direct Connect, and will follow AMD on true quad core....
But AMD simply keeps innovating. In two years, we won't have CPUs anymore, we will have APUs-- Accelerated Processing Units. While Intel is blindly contemplating CPUs with 100 cores, AMD engineers actually use their brains. The APU concept, much like the Torrenza concept, makes perfect sense.
Expect Intel to copycat that again.
51 Comments:
But aren't Intel going to BK(?) in 2Q'08 as you said?
APU will not come earlier than 2008, so how can Intel copy it, if they go bankrupt? Hm, that will be tricky.
Or have you changed your mind about when they will "go out of business"
APU will not come earlier than 2008, so how can Intel copy it, if they go bankrupt? Hm, that will be tricky.
I wrote "In two years, we won't have CPUs anymore, we will have APUs-- Accelerated Processing Units." -- because Intel would have BKed by then. AMD is simply fragging Intel now. With 65 watt x2 3800+ selling at $130 with heat sink and fan, 75% of Intel's processors must be sold at a price below $90. As AMD cranks up volume, it will keep slashing prices, and deny Intel the market and profit.
"As AMD cranks up volume, it will keep slashing prices, and deny Intel the market and profit"
If AMD denies Intel's profit, why does it lower its C2 prices by up to 40%? Doesn't make much sense, especially when considering AMD still doesn't have a high-end answer to C2, and when single cores and new "celerons" come out, it won't have an answer to them either.
Btw, is it just me or are your posts getting lamer and lamer with each new one you make? In the Good Old Times they were at least fun to read; lately you just post the same old thing for the (N+1)'th time.
why does it lower its C2 prices by up to 40%?
You are talking about Intel's planned cut in 2007. A double die Conroe has a total die area of 320mm^2, AMD's quad is much smaller than that. AMD has a major cost advantage because its chips only need 1MB of cache.
Well, the notion that Intel will go bankrupt in 2008 is silly and is still based on exaggerations.
However, there is an interesting aspect of this competition that is not silly. AMD today is the reigning champion in 4-way. With the new Direct Connect 2.0 architecture in 2008, AMD will bring this same competition up into 8-way and 16-way. We know that Intel will release a 4-way chipset in late 2007 and will probably have something reasonable for 8-way in 2009. The problem is that although this keeps C2D competitive it will also eat into Itanium's market.
So, Intel is in the unenviable position of either allowing AMD to stay ahead with X86 servers or actually helping to push its Itanium line out of the market.
IBM shares a similar position. Once X86 pushes Itanium out of the market this will begin to put pressure on IBM's Power series. So, IBM has the choice between partnering with AMD to prevent Intel from taking over the server market and helping X86 eventually obsolete its own Power chips. At least with IBM it won't happen quite as quickly. Itanium should begin dropping in 2009 while real pressure probably won't hit Power until 2011-2012.
"With 65 watt x2 3800+ selling at $130 with heat sink and fan, 75% of Intel's processors must be sold at a price below $90."
75% of Intel's DESKTOP processors, which is <<30% of Intel's business... but keep sticking with this stupid damn point, because in 2 quarters you won't be able to use it anymore... might as well try to get as much mileage out of this as you can.
You really are an idiot because by the time AMD finally gets the capacity, Intel won't be making P4's anymore.
Your past capacity calculations/ projections are just laughable:
- 65nm is not giving 2X the capacity as you told everyone (look at the X2 90nm and 65nm die sizes!)
- you have completely ignored the capacity swing hit during conversion (i.e. you cannot instantaneously change 200mm/90nm equipment into 300mm/65nm equipment); couple that with structural fab changes (factory HW automation) --> F30/38 will be running under 100% theoretical capacity ALL of 2007.
- factor in a greater mix of dual cores, intro of quad cores, development and steppings for K8L, the next mobile chip, CTI revs on 65nm...
You'd be dangerous if you had even a passing understanding of factory capacity.
Tell us where is this magic capacity going to come from? While you keep talking about 200mm to 300mm conversion of F30 you ignore:
- the fact that current 200mm wafer starts are higher than what the mature 300mm fab will be able to do (thus you do not actually just get the 2.25 wafer area scaling...)
- It will happen AFTER P4's are EOL'd...(if not completely, then >90%)
Once again, please illuminate us on why AMD is not able to get a 50% die size reduction between a 90nm X2 and a comparable 65nm one. Do you know?
Also, for the financial side of your question, the 65nm process costs more - extra metal layer, processing steps (strain, et al), SSOI wafer cost, additional critical litho steps, equipment depreciation (90nm has been depreciated). The question is how much do the additional dies/wafer offset the additional process cost? (especially if they are not getting 2X area scaling) - it is a net positive, but not as positive as you think (or hope?)
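The 200mm-to-300mm scaling argument above can be sanity-checked with a few lines of Python. This is a sketch only: it ignores edge loss, scribe lines and yield, and the 143 mm^2 die size is simply the Core 2 figure cited further down the thread.

```python
import math

# Wafer area scales with the square of the diameter -- the source of the
# oft-quoted 2.25x figure for a 200mm -> 300mm conversion.
area_ratio = (300 / 200) ** 2   # 2.25

def gross_dies(wafer_diameter_mm, die_area_mm2):
    """Crude gross-die count: wafer area / die area.  Ignores edge
    loss, scribe lines and yield, so it overstates real output."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area / die_area_mm2)

print(area_ratio)                                   # -> 2.25
print(gross_dies(300, 143) / gross_dies(200, 143))  # ~2.26, close to 2.25x
```

As the commenter notes, the real-world gain is smaller still, since 200mm wafer starts may exceed what the converted 300mm fab runs at first.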
Out of your list 'AMD Innovations' I can only find one that Intel truly took from AMD; the AMD64 instruction set. The rest is utter nonsense; Intel was testing integrated memory controllers well before AMD. The Alpha processor in 2003 was the first shipping processor with an integrated memory controller. The IBM POWER4 processor was the first dual core processor back in 2000; five years ahead of either AMD or Intel. HyperTransport is just an evolution of the Alpha EV6 FSB, so it's hardly an AMD innovation. CSI will not be HyperTransport anyway, will it?
Sun has the 8 core UltraSPARC T1. Intel has quad core right now. Who's following who? AMD is still at least six months away from introducing a quad core processor.
Now... let's not forget that AMD copied Intel's entire x86 architecture!
"A double die Conroe has a total die area of 320mm^2, AMD's quad is much smaller than that. AMD has a major cost advantage because its chips only need 1MB of cache."
You ignore yield considerations of MCM vs monolithic core and the ability to bin split at higher speeds by matching up chips optimally (rather than a monolithic core, where the bin is driven by the lowest performing core...)
And have you looked at total cache on AMD quad core vs Intel quad core? Or do you not consider L3 as cache?
As for die size, Core 2 die size (dual core) = 143 mm2. Let me know if you are having trouble with google and I can provide a link or two or several hundred... How exactly do you get 320mm2 again?
I don't have a phd like you but I get 2 x 143 = 286mm2
And AMD's quad die size? Do you have a link? Based on the picture in the INQ it looks to be >250mm2... (of course you have to be able to count and do simple division and multiplication for this estimate). I haven't seen anything officially reported.
Hint: count the # of die across the wafer and divide that into 300, then do the same thing in the other direction, and then multiply the 2 #'s together (you will have to take your shoes off for the counting as there are more than 10; you can use excel for the multiplication and division)
A question - when you are just making this crap up, is it your hope that no one will know any better than to provide accurate data, or that they'll just take your word for fact?
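That counting hint can be sketched in a few lines of Python. The 18-by-20 die counts below are made-up numbers purely for illustration, not real wafer data; they just happen to land near the >250mm2 eyeball estimate mentioned above.

```python
def die_area_from_counts(dies_across, dies_down, wafer_mm=300):
    """Estimate die area from a wafer shot: divide the wafer diameter
    by the number of dies counted in each direction, then multiply
    the resulting die width and height."""
    return (wafer_mm / dies_across) * (wafer_mm / dies_down)

# Hypothetical counts: 18 dies across and 20 down on a 300mm wafer
# would put the die at roughly 250 mm^2.
print(round(die_area_from_counts(18, 20)))  # -> 250
```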
One WISE MAN Said:
"It does not matter who invents it, it matters who sells it"
And yes, your arguments are recycled babble
2007 projections:
- K8L will be a total failure after being late for several quarters
- AMD will frag itself for over promising and under delivering
- nVidia will eat the rest of ATI's lunch
- AMD will have market share of 10% but will not go bankrupt, yet
- Sharikou gets fired from his minimum wage job, again! He will stop posting wacko stories because he has to sell computers for burgers after his mom kicks him out of the house!
- K8L forces Intel into chapter eleven
- AMD is still capacity constrained
- Nvidia is crushed into oblivion by the r600 and its followers
- Sharikou was, is, and will be all the way right, you Intel-Nvidia fanboys
- Nvidia buys Intel in a Looserswedding
Tiger Direct's top 10
1. Intel Core 2 Duo E6600 2.40G
2. Intel Core 2 Duo E6600 2.40G
3. Intel Pentium D 830 3.0GHz /
4. Intel Core 2 Extreme QX6700
5. AMD Athlon 64 X2 5200+ 2.60G
6. Intel Core 2 Duo E6700 2.67G
7. Intel Core 2 Duo E6400 2.13G
8. AMD Athlon 64 4000+ 2.40GHz
9. Intel Core 2 Duo E6300 1.86G
10. Intel Pentium 4 517 2.93GHz
Looks like Intel is in real trouble
Out of your list 'AMD Innovations' I can only find one that Intel truly took from AMD; the AMD64 instruction set.
That's what's funny - AMD stole this from Intel. People act like AMD made some giant leap and really "innovated" when they released 64-bit extensions.
Yeah. Right.
Hey. AMD. 1986 called. It wants its idea about simply adding some new instructions to double the register and addressing width back. It's called "Intel i386".
Crikey, Intel did the same thing when going from the 286 to the 386. This is not innovation, it's yet another example of AMD copying Intel's ideas. They capitalized (and I give them credit for it) on Intel's mistake in not doing it first, though.
Sharikou is right again. Intel BK in 2Q'08...
Intel has a lack of ideas and it will copy AMD's technologies once again; It is like a blind "whale". And BTW: K8L is the future... Start flaming ;)
I love all the BK, and frag this and frag that nonsense....highly accurate and immensely constructive.
It's clear that few really understand or want to admit that Ma and Pa just don't give a crap what CPU is in their system....cause either works just fine. Combine this with a group of OEMs who will benefit from a strong AMD and a more humble Intel and there's really no way Intel doesn't get hurt in the next year or two. Conroe can't stop this and a 45nm conversion won't help any.
The real question is: Can Intel turn itself around before it's too late?
It has/had the talent, although that talent is fleeing as fast as it can, leaving behind the politicians and wanna-be's. Morale is worse than ever at Intel and they still haven't figured out that the system is broken, filled with a bunch of 'managers' that don't have a clue how to improve things and even less willingness and ability to make decisions.
Intel's best bet is for someone with a lot of weight in the finance world to wake up and start pushing the Intel BOD to cut Otellini and his group of merry idiots... and oh yeah, don't forget the conductor, Barrett. That company needs "Leadership" and there's none in sight right now.
I don't cherry pick stories but since you seem to be on the lookout for anything negative for Intel you might want to look at ViiV.
They are suggesting that ViiV is pretty much a flop. However, they don't see Live as a stunning success either.
Intel copied AMD on the 64bit extension purely for marketing purposes. But Intel has always been correct in saying we still don't need it. The only thing AMD64 did is sucker people into buying, making them think it is "future proof". I really wouldn't tout this as a major achievement furthering the advancement of computing. It contributed nothing and quite frankly it's a waste of money.
Everything is a copy from something. It is pointless to play this childish game of who copied what when we know that technology decisions are made with the overall business in mind. The important thing is who is on top TODAY because of those decisions. It's not AMD.
"It does not matter who invents it, it matters who sells it"
Exactly, and with another poster's comment, you've pretty much acknowledged that Sharikou is right!
He never said that AMD created/invented some of this technology; they just innovated by integrating and using it.
If all this technology was actually out there for the taking why didn't Intel capitalize on that... and sell it first?
Because they are a lazy, stupid, slow, incompetent, non-innovating, bureaucratic and monopolistic company!
AMD on the other hand is... BRILLIANT!!!
And a lot more to come from them, you'll see :)
Now... let's not forget that AMD copied Intel's entire x86 architecture!
You're wrong! Allow me to give you a little history lesson:
There was a great company called IBM and they saw a new market... personal computers!
So they approached Intel and said, we'd be interested in buying MILLIONS of your processors from you to build personal computers!
Intel replied "but you're mistaken... we don't produce CPUs, we only have calculators here at Intel, sorry!"
IBM replied "whatever, do you want the money or not?". "If you do sign here, here and here. But there is one condition attached to this contract."
"What's that?" replied Intel.
"We'd like to hand over on a golden platter, all of your technology to a second source... AMD!"
[History fact: This clause was to prevent Intel from screwing millions of consumers. But even though IBM thoughtfully and legally attempted to protect you, Intel still managed to screw IBM, AMD and millions of consumers]
Without thought, Intel immediately responded "no F!@#$!@# way... are you crazy?"
With arrogance, IBM responded "ok, we'll sign Motorola instead, we've got nothing to lose... bye!"
"Wait" yelled Intel, "where do we sign?"
[... and the rest is history ...]
"Intel was testing integrated memory controllers well before AMD"
Oh really. And was also working on its Hypertransport version since 1992, so what?!?!?!
And IBM Power4 doesn't do x86. I could provide you loads of links to multi core processors from many companies, even nvidia and ati.
But the bottom line is that Intel is not innovative. Double dual core is just a packaging trick; see what Intel says about it on the anandtech site.
If AMD hadn't delayed 65nm (like everyone did!) there would already be quad cores from AMD, just as you saw dual cores from AMD first.
Sorry, but I must have been absent in those times AMD was "innovating"...
"- Nvidia buys Intel in a Looserswedding"
Just can't stop laughing.
K8L is nothing now, and 'now' is what really matters. Let me spell it out to you.
Right 'now' Intel is kicking the crap out of AMD.
Right 'now' Intel is moving its outdated products successfully.
Right 'now' Intel's modern desktop processors are doing very well in terms of sales against the best of AMD's modern processors.
Right 'now' Intel's mobile processors are killing AMD's mobile processors.
Right 'now' Intel is eating into AMD's server market.
Right 'now' AMD has released one of the nastiest, crappiest processors in its history - the 4 x 4. And most people see this as a failure.
Right 'now', AMD's 65nm chips are nothing special, relatively speaking.
Right 'now' direct connect, embedded memory controllers, and AMD64 mean nothing in terms of performance, since Intel's old ways of doing things still outperform all this new technology.
(When Vista becomes available, people may still balk at 64 bit processing. We'll see what sort of mess 64 bit Vista will make of people's current software and hardware configs. But I'm guessing that Vista will flop big time because it offers nothing in exchange for its higher system demands aside from fancy graphics, which in my estimation serve only to make things more complicated for the average user who can barely handle cutting and pasting right now with XP. I'll give it a year and a half before Vista becomes accepted by the masses.)
I'm laughing at everyone who thinks AMD is the be-all, end-all. AMD has always been a meagre alternative to Intel, and is 'now' currently an alternative, and unless AMD works some bloody magic, they will continue to be nothing more than an alternative.
Right now Conroe C2Ds use 22 watts of idle power while AM2 uses 7.5 watts of idle power.
Right now Anand reports that only AMD servers are a good buy because at 60% usage they beat the crap out of Intel's chips on power savings.
Right now Intel's sales are in a sliding decline while AMD is in a sales boom.
Right now my AMD AM2 5000+ beats the crap out of my e6600 on energy, bandwidth(mem), workload, and graphics.
Why buy power sucking C2Ds from Intel when you can have an energy efficient AM2 saving you money daily?
Right now Intel has massive platform problems.
Right now AMD has great and low priced platforms.
Gluing together 2 or 4 antique Pentium 3s will never take Intel to the future.
No matter how many Model Ts I glue together and supercharge, it still won't be a Ford Cobra.
AMD rose to number 7 chip seller last quarter and maybe number 1 next quarter, as soon as Intel stops counting chips made by others and sold by others as their own.
" Scientia from AMDZone said...
I don't cherry pick stories but since you seem to be on the lookout for anything negative for Intel you might want to look at ViiV. "
I think the same about vPro. What... these days I only see like 5 vPro ads a day compared to the 100 a few weeks back... lol
I wonder whose idea was platformization!
I wonder who bought another company to follow the platformization model!
Ooooh!, that must have slipped a slippery mind!
Look who's copying whom...
"As one of the first microprocessor manufacturers to innovate with lead-free bumping technology, AMD is continuing its track record of environmental responsibility and customer-centric thinking in how we produce our products," said David Bennett, vice president of strategic manufacturing and alliances at AMD, in a statement.
He's kidding right? One of the first? How long has Intel been doing lead free bumps?
"innovate with lead-free"? sure if innovate means being the 3rd or 4th company to do this and BUYING the technology...
The really scary thing is this guy is a VP! VP of spin?
Frankly, as a software developer, I'm extremely happy about AMD64; it allowed me to port my software to 64 bit Linux using cheap commodity parts. Previously the only 64 bit builds I could do were on expensive Sparc and PA-RISC hardware. The fact that I could do this with dual-core was even better, revealing threading issues in my code that would never have surfaced on a single-core machine. And all of this came about, over a year ago, because of AMD. Not Intel, IBM, Sun, or HP.
Crikey, Intel did the same thing when going from the 286 to the 386. This is not innovation, it's yet another example of AMD copying Intel's ideas. They capitalized (and I give them credit for it) on Intel's mistake in not doing it first, though.
For starters, your comparison is oversimplified to the point of idiocy. Not to mention you seem to have "forgotten" that Intel engineers looked at 64 bit x86 and concluded that it was impossible. Did you also forget that Intel said the public wouldn't be interested in 64 bit computing?
Hey pal, your village called, they want their idiot back.
Remember, it's better to be thought a fool than to use one's keyboard and remove all doubt.....
You can spin this how you want but THE FACT remains that Intel was the first to develop and manufacture the x86 microprocessor, everyone after them copied it.
"THE FACT remains that Intel was the first to develop and manufacture the x86 microprocessor, everyone after them copied it."
I assure you, that in terms of design and technical merits, x86 is nothing Intel would brag about. Only Intel fanboys would.
The 32-bit extension was not much better, either. The AMD64 extension, OTOH, is truly good. (See, it's so good that even Intel adopts it.)
Mr Right Now when you say "Right 'now' Intel is moving its outdated products successfully." are you talking of Core 2 Duo processors?
"Right 'now' AMD has released one of the nastiest, crappiest processors in its history - the 4 x 4. And most people see this as a failure."
Oh really?
Read this:
http://www.tomshardware.com/2006/05/10/dual_41_ghz_cores/page14.html
Intel does 474W with:
-1 low end processor
-1 GPU
-2 banks of 512MB RAM.
AMD does 530W with:
-2 top of the line processors
-4 banks of 1GB of RAM
-4 PCIe 16x slots!
-12 SATA ports
-1 or (2 GPUs?) (not sure here)
-Great, great, great platform never seen on any desktop computer until now. A new era of computing!
You bark about the power consumption of this but forget that Intel had (has) higher power consumption in all their CPU lines; this is just for the FX, and it is for special people, a specific market. Intel had (has) the P4 pissoff (presshot) in all their market segments from mobile to server!
"-Great, great, great platform never seen on any desktop computer until now. A new era of computing!" (reference to 4x4)
It's never been seen before because the massive power supplies to run it never existed! Now that we have 1100W power supplies, bring it on!
How about a 4P version! I'm re-wiring my home and will be upgrading my CB's shortly... This way I can play 5 games, rip 30 DVDs, zip 500 files, while watching a HD movie all at the same time!
There's a reason multisocket is only on the server side. Realistically what need/problem is 4x4 solving on the desktop?
"You are talking about Intel's planned cut in 2007. A double die Conroe has a total die area of 320mm^2, AMD's quad is much smaller than that."
You have previously estimated AMD's quad at ~283mm2. If you take the true Core 2 die size of 143mm2, this is 286mm2 of Si area for the Intel quad MCM.
Now factor in yield and sort benefits of being able to match 2 dual cores together as opposed to the monolithic design where as soon as one core is lower speed than the others it will need to be binned downward.
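The yield side of this argument can be illustrated with a toy Poisson defect model. This is a sketch only: the defect density below is an arbitrary assumption, not a real fab number, and the die areas are the ~283mm2 and 143mm2 figures from this thread.

```python
import math

def die_yield(defects_per_mm2, area_mm2):
    """Poisson model: probability that a die of the given area
    contains zero random defects."""
    return math.exp(-defects_per_mm2 * area_mm2)

D = 0.004  # illustrative defect density (defects/mm^2) -- an assumption

monolithic = die_yield(D, 283)  # one large native quad die
dual = die_yield(D, 143)        # one dual-core die used in the MCM

# Because MCM dies are tested before packaging, two known-good 143mm^2
# dies can be paired, so the relevant per-die yield stays at the
# small-die figure instead of dropping to the large-die one.
print(dual > monolithic)  # -> True
```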
And before you start going off on cache yet again - I believe Intel's native quad core is planned to have 6MB cache. AMD's quad core is 4MB total between L2/L3....
Factor in AMD's 65nm scaling has not yet panned out (~33% instead of your planned 50%) and where again is AMD's Si die area advantage?
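The scaling figures being argued over fall out of simple geometry. This is a sketch under the idealized assumption that every feature shrinks linearly with the node name, which real shrinks rarely achieve since not all structures scale.

```python
def ideal_area_scale(old_nm, new_nm):
    """If every feature shrank linearly with the node, die area
    would scale with the square of the node ratio."""
    return (new_nm / old_nm) ** 2

# 90nm -> 65nm: ideal is ~0.52x the area, i.e. roughly the "50%
# reduction" figure; the ~33% reduction observed above corresponds
# to a ~0.67x factor, well short of ideal.
print(round(ideal_area_scale(90, 65), 2))  # -> 0.52
```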
Who is copying the SSE4? come again?
Oh really. And was also working on its Hypertransport version since 1992, so what?!?!?!
No they weren't. CSI has been in development for the last few years only. They had processors planned with an integrated memory controller back in 1999. Go read up on the Timna processor. It was canceled because RDRAM was deemed too expensive to use with a low end processor like Timna, so there would have been no need for it. It still doesn't change the fact that the Alpha back in 2003 was the first shipping processor with an integrated memory controller. Who cares if it's not x86? The MIPS processor did 64bit back in 1991. That's 12 years before AMD. IBM had dual core processors 5 years before AMD. Both of those are facts; so stop trying to give AMD credit for other people's work. :O
And IBM Power4 doesn't do x86. I could provide you loads of links to multi core processors from many companies, even nvidia and ati.
But the bottom line is that Intel is not innovative. Double dual core is just a packaging trick; see what Intel says about it on the anandtech site.
Here's a hint: No one cares if it's native or not. Intel will sell MILLIONS of quad core server processors before AMD has shipped ONE. In the meantime all they can do is babble about how you need a "native" quad core processor and show one running... task manager! If Intel was stuck with still only dual core processors, and AMD developed a quad core MCM part using two dual core CPUs, you'd yap about the benefits of it, saying how far behind Intel is. Instead all you can think is "it's not native quad core"! Nvidia and Ati have not made dual core GPUs either. Where are you dreaming this up!? The only dual GPU part Nvidia has made is the Geforce 7950 GX2, which uses two separate GPUs on two different PCBs linked internally to use SLI. Intel has native quad core processors on the stunning new 45nm process too, FWIW.
If AMD hadn't delayed 65nm (like everyone did!) there would already be quad cores from AMD, just as you saw dual cores from AMD first.
Sorry. AMD did not release dual core processors first. Another company did five years before AMD. Do you know the name of that company?
Intel never postponed its 65nm process, neither did TSMC. Maybe it's just AMD that is not up to the task of getting up and running on time. Hell, if it wasn't for IBM's help AMD would still be twiddling its fingers trying to work out how to clock K8 higher than 1GHz lolol. They delayed K8 for over two years. Just the same as K8L will be delayed and delayed.
Intel, OTOH, has had near perfect execution this year. They released the Core Duo for notebooks, then they managed to pull the Woodcrest/Conroe/Merom launch in from "sometime in Q3 at the earliest" to a staggered June/July launch. Now quad core processors; pulled in from February 07 to November 06.
Intel owns the mobile and desktop markets. Nothing can touch Core 2 Duo. In servers all AMD has left in the Opteron is the 4S and 8S space. It is so low end and pathetic that it cannot scale to 16S and 32S. In 1S and 2S space they get owned by Intel quad core.
I don't have the dough for FX or Extreme Editions, but I could easily see one in the home with AMD Live software suite.
- Record PVR HD shows in hidef
- Stream HD recordings to a second room on my XBox 360
- Play game on the PC itself
And even when I am out of the house, I still have enough headroom to do real time transcoding with all these "background" tasks running to stream off a decent quality vid of my recordings to my notebook on the road.
Not for everybody.. of course. But fits in with the connected vision that both AMD and MS have been touting.
Who cares if it's not x86? The MIPS processor did 64bit back in 1991.
I don't care; a processor is a processor.
In the meantime all they can do is babble about how you need a "native" quad core processor and show one running... task manager!
Well, for a guy that didn't care if it's x86 or not, you seem worried that Intel has a quad core (double dual) FIRST over AMD. Even IBM, Sun, … are ahead of Intel in that department. It seems that Intel has done a great achievement with the double dual core, as if it took lots of engineers to design it.
Here's a hint: No one cares if it's native or not.
No one. You mean you. Because to me there are huge differences.
Here are some from Intel:
- Each die is a dual core unit.
- Faster time to market.
- Less Engineering resources.
- Two smaller dies yield better than a double sized die.
- Share wafer starts with dual core
- Ability to match die for better bin splits
The only win for me is faster time to market; all the others are good for Intel. Like I said, if AMD had released 65nm sooner you would have seen native x86 quad core first from AMD, and the faster time to market would be in the garbage too.
If Intel was stuck with still only dual core processors, and AMD developed a quad core MCM part using two dual core CPUs, you'd yap about the benefits of it, saying how far behind Intel is.
No I wouldn't do that. I would even say that I would like to see AMD doing that. After all, do they have Hypertransport just for multiprocessors? How about doing it in a single package like Intel does?
Instead all you can think is "it's not native quad core"!
And it isn’t.
"The only dual GPU part Nvidia has made is the Geforce 7950 GX2, which uses two separate GPUs on two different PCBs linked internally to use SLI."
X1600 is single core.
X1650XT is dual core
X1950Pro is triple core.
X1900XT is quad core by my standards!
Same die more processors (cores)!
Nvidia takes the same route. I can provide you links where, when nvidia and ati were asked about going the Intel and AMD CPU route, they all say their designs are already multicore, and they are!
Your example of the nvidia 7950 GX2 is plain stupid. That is multi processor!!!! Not multi core!!!!!!
You don't understand what a multiprocessor board is. Look at the AMD 4x4 idea and then look at the nvidia 7950 GX2 and see the differences.
Sorry. AMD did not release dual core processors first. Another company did five years before AMD. Do you know the name of that company?
I bet it wasn’t Intel.
Maybe it's just AMD that is not up to the task of getting up and running on time.
Where are IBM's 65nm processors then? AMD said it wanted 65nm at mature yields. That delays what, 6 months, or more? Intel's 65nm processor production passed their 90nm just 2 months ago.
Intel owns the mobile and desktop markets.
Yes, 80% of the market. Most, according to numbers from retailers, are Celerons and PMs on notebooks, P4s and PDs on desktops, and some Xeon 3.0 (netburst) on servers.
Nothing can touch Core 2 Duo.
Especially the Celerons and P4.
Intel owns the mobile and desktop markets. Nothing can touch Core 2 Duo.
Yes, there is the AMD Athlon 64 4000+, very good for $99, or the FX55 for $129. There is no Core 2 equivalent at that price. And I bet it performs very well. Too bad review sites suddenly removed all single core processors from Intel and AMD from the reviews. And there are AMD equivalents at some Core 2 Duo price points up to the E6600.
^Intentionally repeated. Since you like copy-paste processors you'll have no problem reading it twice.
"Realistically what need/problem is 4x4 solving on the desktop?"
The idea is too new to be out already; that's the problem.
There are no multi core games.
There are no 64 bit games.
There are no quad SLI capable games (i think).
There are no coprocessor to cope with the CPU and second socket.
Even with dual core processors I can't do anything with them; I don't even want to think about what I would do with a quad.
Very good for the guys that use 3D rendering applications, I think. The rest have 0% need for that.
""Realistically what need/problem is 4x4 solving on the desktop?"
The idea is too new to be out already; that's the problem."
Virtualization.
Virtualization.
Inadequate memory support with only 4 DIMM sockets. 2GB DIMMs are expensive, and a virtualization system doesn't need multiple PCIe 16X slots, 10 USB ports, 12 SATA ports, and excess power usage.
And virtualization on the desktop is a tiny blip, at best.
"Virtualization"
On the desktop? You see virtualization as filling a MARKET need in the immediate/near future or do you mean maybe 2-3 years down the line?
Yeah I could just see people heading into Best Buy or online at Dell:
"yeah but does this have virtualization support?"
"I would like to go with the slightly less expensive X2-5000 which seems to use less power but I really need that 4x4, you know for virtualization!"
Please...
X1600 is single core.
X1650XT is dual core
X1950Pro is triple core.
X1900XT is quad core by my standards!
Same die more processors (cores)!
Who the hell made this crap up?
Each of the GPUs mentioned above is just that: a single GPU. Some are more powerful than others: the X1900 XT has 48 shader processors while the X1950 Pro has only 36. There are differences in memory size, clockspeeds etc.
There is still only ONE core. From a lowly Geforce 2 MX to a Geforce 8800 GTX there has never been a dual core GPU, never. Some companies have placed multiple GPUs on the same PCB, but that does not make them multi-core GPUs.
Intel does 474W with:
-1 low end processor
-1 GPU
-2 banks of 512MB RAM.
AMD does 530W with:
-2 top of the line processors
-4 banks of 1GB of RAM
-4 PCIe 16x slots!
-12 SATA ports
-1 or (2 GPUs?) (not sure here)
-Great, great, great platform never seen on any desktop computer until now. A new era of computing!
Intel never released a Prescott CPU for mobile platforms. Ever heard of the Pentium M? :>
The processor you've chosen to compare the awful 4x4 with has been overclocked and had it's voltage increased so much that liquid cooling was necessary. It's also a Pentium D. It's also an ancient 90nm that Intel flogs for under $100. $100 vs the $1000 for your 4x4 processors. Yeesh. Go try again with the Core 2 Extreme QX6700. Faster and cheaper performance while using half the power.
P.S. It looks like AMD is ready to pull a Prescott of its own: the 65nm Athlon 64 X2s are slower than the old scrapheap 90nm parts. Pathetic. AMD may as well just give in now. Its 65nm does not work. 65nm slower than 90nm. 90nm cannot clock up higher because it's so old and junk; relegated to cheap budget processors only!
Go and enjoy your hot 4x4 computer; it'll give off enough heat to make living in Antarctica bearable! Meanwhile I'll just sit here with my cool-running and FAST Core 2 Duo.
"On the desktop? You see virtualization as filling a MARKET need in the immediate/near future or do you mean maybe 2-3 years down the line?"
I knew there would be people dumping sh*t on my short comment. I guess in your view many power users are not allowed to use a desktop?
I never said you should get virtualization support from Best Buy. Neither did AMD say 4x4 is suitable for the majority buying from Best Buy. However, the need for virtualization on some power desktops does exist, and the fact that you don't know this only shows your ignorance. In fact, the type of people who'd benefit from C2Q are also those who'd likely need virtualization.
A small business can virtualize its business desktops, so the employees can switch between Windows and Linux without hassle. A developer can virtualize his development and production environments to separate them from each other. A power user can virtualize his gaming and working/personal environments for security and other purposes.
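Before any of those use cases work well, the CPU needs hardware virtualization support (AMD's "Pacifica"/AMD-V, Intel's VT-x). A minimal sketch of checking for it from the Linux CPU flags; the `detect_virt` helper and the sample flags string here are hypothetical, not from any real tool:

```shell
# Hypothetical helper: report hardware virtualization support from a CPU
# flags line. "svm" is AMD's AMD-V (Pacifica) flag; "vmx" is Intel's VT-x.
detect_virt() {
  flags="$1"
  case " $flags " in
    *" svm "*) echo "AMD-V (svm) present" ;;
    *" vmx "*) echo "Intel VT-x (vmx) present" ;;
    *)         echo "no hardware virtualization flag" ;;
  esac
}

# On a real Linux box you would feed it the live flags line, e.g.:
#   detect_virt "$(grep -m1 '^flags' /proc/cpuinfo)"
# Here we use a made-up sample flags string:
detect_virt "fpu tsc msr pae svm cx8 lm"
# → AMD-V (svm) present
```

Without svm/vmx the VM monitor falls back to slower software techniques, which is why these flags mattered for a virtualization-oriented box in this time frame.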
"I guess in your point of view many power users are not allowed to use a desktop?"
No, my point is this IS A NICHE MARKET! Do you really think virtualization on the desktop is the problem that AMD is trying to solve with 4x4? (I apologize in advance if you threw that out as only one of many possible gaps/needs that 4x4 addresses.)
Oh wait, I forgot, the problem they are solving is MEGATASKING...
My point is a 2P desktop may have some value 2-3 years down the line (maybe?); but for >99% of the desktop market it is meaningless. I would much rather have AMD focus on chip development, 65nm maturity, etc. than waste time trying to address <1% of <30% of the overall CPU market...
"My point is 2P desktop may have some value 2-3 years down the line (maybe?); but for >99% of the desktop market it is meaningless."
Point taken, but disagreed. Virtualization on the desktop is a real problem today, though not as much as on workstations and servers (as you said, a niche market). Besides, what problem is NASA trying to solve by landing on Mars? If there is value in a NASA project that looks 20-30 years into the future, there is surely some value in 4x4, which looks only 2-3 years ahead.
The fact that the problem might be more prominent in the future doesn't mean it doesn't exist or isn't worth addressing today.
"If there is value in a NASA project that looks forward 20-30 years into the future"
This statement implies there is value. (not sure I agree)
As for 4x4 on desktop - the stuff being "innovated" (for lack of a better word) could also be done on 2P server boards (with the exception of the graphics card layout aspects). It's not like 2P or virtualization will not be worked on if 4x4 isn't around.
Also, last I checked, NASA wasn't trying to market and sell its exploration of Mars to the public consumer, so I'm not sure your analogy/metaphor is apt.
There is a difference between doing research and pawning off a product which doesn't address market needs under the guise of "future-proofing". I believe AM2 was rolled out under the future-proof marketing scheme, and I'm still quite happy (and not all that crippled) with a 939 chip/board. By the time it is no longer competitive I doubt AM2 will be the mainstream (AM2+, AM3, 4x4, 4x4+, 4x4++ are all on the AMD roadmap over the next 2 years); as an example, it will be physically compatible with K-late but with many of the innovations turned off or hamstrung.
BTW, Intel does the same thing, so I'm not saying this is exclusively an AMD issue... I just don't understand why people defend this. As an example, outside the niche video-editing market I'm not sure why people would use C2Q over C2D right now. The only reason this is slightly better (in my mind) than 4x4 is that Intel is not asking you to double your power consumption, start over with a new board, etc. to make the transition to a marginally better product.
I think sharkymoron is gonna hit the spot like he did with 4x4 "making intel go BK"
Sharikou, stop your stupid religious fanboy babbling.
Intel Frags AMD now.
Intel will never BK.
K8 is not powerful enuff to compete with CORE2, K8 is dead.
Stop bullshitting yourself.
"Sampsa: Do you also expect to see dual core graphics solutions in the future in a similar manner as Intel's and AMD's dual core CPUs?
Dave: 'Graphics chips have been implementing multi-cores for years, and today we have 16 or more cores operating on one ASIC. Putting two GPUs on a die is not really a necessary step, but putting multiple GPUs on a card is a logical step going forward.'"