Intel's front-side bus architecture in plain English
Nathan Brookwood, founder of research firm Insight 64, said the following about Intel's Core 2 Quad: "It is as if you wanted to talk to your neighbour in your apartment building, and the only way to do it was to mail them a letter."
150 Comments:
And yet it's still faster than anything AMD offers at this time.
Imagine that.
Sharikou, where's the Beef?
Your latest posts are really lacking substance and have resulted in a complete decline of debate on this blog.
While it has been understood that the FSB is going to hurt Intel down the road, you continually bash a company which, while still using the FSB, is still beating AMD in UP, DP, mobile and desktop.
Can you just not accept this, or is it that you don't believe it?
Are you hoping a few of the regular Pro AMD people will jump for joy, make some ridiculous post that Intel sucks and continue the decline of intelligent debate?
I really hope it changes after Computex, and that the release of Barcelona numbers can put things back into perspective or spark some good debate.
And yet IBM uses Intel in its high performance grid systems.
www.computerworld.com/action/article.do?command=viewArticleBasic&taxonomyName=grid_and_utility_computing&articleId=9019681&taxonomyId=65&intsrc=kc_top
enumae, using the words intelligent debate and Sharikou in the same post is rather inane.
1.) He claims Intel will be bankrupt next year.
2.) He posts what have clearly been discovered to be faked "benchmarks" and fails to retract them.
3.) He posts "stories" based on one or two sentences in a review, to try to put a product in a positive light, when in fact the review itself was quite negative of the product.
This simply smacks of pure unprofessionalism.
Or fanboism...
Or just plain old lies.
Take your pick.
"It is as if you wanted to talk to your neighbour in your apartment building, and the only way to do it was to mail them a letter."
I don't think this is anywhere close to a good analogy. An FSB is an implicitly shared access medium, whereas letters are switched and routed. There is nothing inside a PC that works like a mailing system.
A better analogy for an FSB connecting many cores: if you want to talk to your co-worker, the only way to do it is to write a note and pin it to the door of every office (core) on the floor (board).
And as was said, it's funny how even with this method of bus architecture, Intel's chips are still faster than anything AMD offers...
In the end it doesn't matter whose interconnect is better. What matters is overall performance. Yes, having a better interconnect can help quite a bit, but it isn't enough for AMD to beat Core 2.
evil
"Take your pick."
Collecting clicks and advertising money.
As expected, you Intelers are unable to comprehend the ideas behind forward thinking. Barcelona is about to hit. When it does, you will all see why this primitive attempt at an FSB will push Intel even further behind than it already is. Please get a clue, kids.
The first freely downloadable DX10 game is out. LegitReviews (the Doctor's linked site) uses it to review AMD and Nvidia cards.
AMD, even though the visuals are completely broken, gets creamed by Nvidia. This was with AMD-provided, unreleased drivers.
By how much?
Over 200%
http://www.legitreviews.com/article/505/3/
Penix, try having some wits before you try matching them.
A clue that Intel, using an antiquated connection system and a modified Pentium 3 arch, is kicking the ever-living crap out of AMD? So much so that they continue to hide true performance numbers, push back release dates, slap NDAs on the press and just generally act so sketchy it makes most sane enthusiasts wonder exactly how much of the K10 myth is based on reality?
Barcelona is about to hit, but unless it breaks at least a 30% performance improvement over Clovertown and 40-45% over the Penryn series cores, it will be considered a failure. Never mind that Intel will do the same with Barcelona as they did with K8 back in the days of Netburst. Even if they don't have the performance lead, they have:
1) The capital to ride out the storm.
2) The ability to outsell AMD, even though Green has broken into traditionally Intel-only markets.
3) A considerable, almost obscenely large time lead over AMD.
AMD is in dire straits, but make no mistake, they won't go bankrupt. But similarly, Intel won't even come close to a mention of bankruptcy unless it comes from this blog, and no one's going to take that seriously.
I like AMD. I own an AMD PC. I plan to buy an AMD laptop because it is the best value. I also know that all Microsoft demos are done using AMD only.
Having said that, Sharikou, I must add that perception is reality unless you are a visionary. Intel gets good (superior?) benchmarks using huge caches and is selling its CPUs at a good margin. Until AMD can beat that (and make some profit), just shut up.
dasickninja blabbered...
...unless it breaks at least a 30% performance improvement over Clovertown and 40-45% over the Penryn series cores, it will be considered a failure...
A failure to whom? An obsessively blind fanboy such as yourself? That kind of logic would only make sense to a complete idiot. If Barcelona is faster by even 1%, that puts the big bad Intel back in 2nd place behind the tiny AMD. That is like Mike Tyson getting his ass handed to him by Mighty Mouse.
"There is nothing inside a PC that works like a mailing system."
All data in memory lives at an address and so do you.
Intel-ers don’t let Dr S. jerk your chain; everyone already knows FSB is not in your future.
I also know that all Microsoft demos are done using AMD only.
Actually this changed recently; now both Intel and AMD based systems are used.
PENIX said...
As expected, you Intelers are unable to comprehend the ideas behind forward thinking. Barcelona is about to hit. When it does, you will all see why this primitive attempt at an FSB will push Intel even further behind than it already is. Please get a clue, kids.
Pot calling Kettle.
Yes Kettle?
BLACK
Intel is top dog right now. I don't know where you are getting your numbers from, but even Intel's middle-of-the-road Woodcrests are only a few percentage points behind AMD's top-of-the-line processor... at a much lower cost. MUCH lower.
So mind telling me just how AMD is kicking Intel's ass?
Penix, if Barcelona isn't considerably faster than Clovertown, never mind Harpertown, it will be considered a failure because it took so damn long to come out and only gives a small jump in performance. And chances are, when that happens, Intel will just crank up the clock speed to tighten the gap. The key here is perception, which you seem to lack.
Congrats Sharikou PhD, you again prove beyond any doubt you deserve that "Ph"ony "D"octorate title you tack onto your signature. You are a disgrace to all those others that work hard and are worthy of that title.
Is that all you've got? Taking silly snippets from others to post on your blog?
AMD has a dog of a graphics card... what's your excuse for this?
AMD has to resort to continued pricing discounts that are costing them hundreds of millions a quarter... what's your explanation for this?
AMD still has no credible Barcebalogna benchmarks... what's your explanation for this?
AMD doesn't have enough capacity to achieve the critical cost/profit to sustain 45nm and multiple design initiatives... what's your explanation for this?
AMD's IMC and Hypertransport are stunning for servers but simply a huge competitive boat anchor for the dual-core and consumer/mobile space... what's your explanation for this?
INTEL on the other hand has Penryn on 45nm ready this year and Nehalem, a quad-core with IMC, next year. On 45nm it will be far cheaper and faster than anything AMD will have through 2008 and most of 2009. AMD couldn't make money being way ahead for six months in 2006; they have no hope in 2007 being way behind for most of the year and are on track for a billion-dollar-loss year, and for 2008 it's equally bleak.
What's the point of your silly blog anyway? For the past 4 years you could get away with making jokes about INTEL, but today it only continues to confirm your "Ph"ony e"D"ucation. Get over it and move on to something more relevant.
I'll give you this much: each post is worthy of high entertainment for silliness now.
"AMD couldn't make money being way ahead for six months in 2006"
AMD made a modest profit for the first three quarters of '06. By the fourth quarter AMD had ATI acquisition charges, and the price war had taken its toll on AMD, thus leading them to post a large loss.
Why bottlenecks are dangerous and warnings should be in place:
http://theync.com/m022607fire.shtml
Barcelona is delayed again:
http://www.digitimes.com/systems/a20070516PD217.html
2900XT Crossfire benchmarks are up.
Gotta love that negative scaling.
Two AMD $400 video cards perform slower than a single $320 Nvidia video card.
DaSickNinja said...
Penix, if Barcelona isn't considerably faster than Clovertown, never mind Harpertown, it will be considered a failure because it took so damn long to come out and only gives a small jump in performance.
I couldn't disagree with you more. Barcelona is only a failure if AMD is unable to match Intel on both performance and price-to-performance ratio. But this speculation is irrelevant since Barcelona will surely take Intel on both those points by a significant margin.
Randy Allen said...
Barcelona is delayed again:
http://www.digitimes.com/systems/a20070516PD217.html
I'm glad that AMD cares enough about its customers that they wait until a product is truly ready before releasing it. Intel, on the other hand, is well known for releasing half-baked pieces of crap. AMD is obviously not desperate to release Barcelona, because they know that even Intel's next generation Penryn will be no match.
Penix said:
I'm glad that AMD cares enough about its customers that they wait until a product is truly ready before releasing it.
O'rly? You mean like the 2900XT????
You are going to hurt yourself if you keep saying stuff like that, Penis.
Evil said...
O'rly? You mean like the 2900XT????
I think the 2900XT is a fantastic card, and I'm sure its performance will get even better once the drivers mature.
http://www.dailytech.com/article.aspx?newsid=7043
But the R600 series is irrelevant because this is not even AMD's first GPU offering. This is scraps left over from ATI before the purchase. When AMD's real GPU hits the market, there will be no room for argument or doubt.
Penis said:
I think the 2900XT is a fantastic card, and I'm sure its performance will get even better once the drivers mature.
In other words, AMD released something not ready for prime time.
Hell, in AMD's SLI mode, running TWO of the cards still doesn't give the performance of ONE Nvidia card.
Evil said...
In other words, AMD released something not ready for prime time.
Hell, in AMD's SLI mode, running TWO of the cards still doesn't give the performance of ONE Nvidia card.
No, it is stable and ready. AMD has a higher standard, unlike Intel, who has had several recalls for defective products.
I cannot say what the problem is with SLI/Crossfire. Both ATI and Nvidia's multi card offerings are unimpressive.
Penis said:
No, it is stable and ready. AMD has a higher standard, unlike Intel, who has had several recalls for defective products.
Ahem: http://www.crn.com/white-box/187001876
AMD recalls Opterons.
Anyways, if the 2900XT was stable and ready, why did it crash for a lot of reviewers? Why does it suck down so much power? Why does it pump out so much heat?
You bitch and moan when an Intel processor uses 5 more watts than an AMD processor, or bitch when it puts out a handful more BTU, but when AMD makes the SAME mistakes with the 2900XT it's A-OK?
WTF?
Is your mind really that brainwashed by Ph(ake)d?
Evil said...
Ahem: http://www.crn.com/white-box/187001876
AMD recalls Opetrons.
You are committing suicide trying to pull that card. Here are 5 Intel recalls, and a known data corruption bug they didn't even care enough to do anything about.
Intel Pentium FDIV Recall
http://www.intel.com/support/processors/pentium/fdiv/
Intel Recalls 1.133GHz Chips
http://www.geek.com/news/geeknews/q22000/chi2000829002238.htm
Intel i820 MTH Recall
http://www.computerwriter.com/Star/2000/jun/cw062100_intel_i820_mth_recall.htm
Intel i850 Data Corruption Bug
http://www.activewin.com/reviews/hardware/processors/intel/p415ghz/cpuarch.shtml
Intel recalls 900MHz PIII Xeon
http://www.geek.com/news/geeknews/2001july/chi20010711006751.htm
Intel Recalls Pentium 4 915 and 925 Chipsets
http://www.theregister.co.uk/2004/06/25/intel_xeon_recall/
penix
"Barcelona is about to hit. When it does, you will all see why this primitive attempt at a FSB will push Intel even further behind than it already is"
How exactly would Barcelona be better on 1P desktops? They also share only two memory channels for those four cores, you know.
"But this speculation is irrelevant since Barcelona will surely take Intel on both those points by a significant margin"
Being as knowledgeable as you are, what are the Barcelona prices?
"I think the 2900XT is a fantastic card, and I'm sure it's performance will get even better once the drivers mature."
So you agree that AMD couldn't make their drivers work even though they had over half a year more time to work on them than nvidia.
"But the R600 series is irrelevant because this is not even AMD's first GPU offering"
The entire marketing and launch was done by AMD; only the design and development was done by ATI. Did you know that a few weeks ago AMD was talking about hard-launching R600 at all price points at the same time? Also, did you know that R600 is advertised as one of the best performance-per-watt series? Too bad that only goes for the low end. Also, AMD was always saying they would deliver R600 with perfectly working DX10 drivers, something NV doesn't have. So far it seems ATI doesn't have correctly working drivers for any DX version.
"When AMD's real GPU hits the market, there will be no room for argument or doubt."
When will this happen?
What kind of GPUs will they create?
As GPU design takes quite a lot of time, the next one we see should still be of ATI origin and be released during the next year or so.
"No, it is stable and ready"
The Prescott of GPUs, with nonexistent driver quality, is considered ready? Sure, for packaging maybe, but not for end-users who believed what AMD was saying just a little while ago.
"I cannot say what the problem is with SLI/Crossfire. Both ATI and Nvidia's multi card offerings are unimpressive."
I agree that both have problems but from what I've seen so far CF is not working nearly as well as SLI.
evil
"Is your mind really that brainwashed by Ph(ake)d? "
I think he did it all by himself. You know, when you keep on repeating something you will eventually start believing in it.
penix
"Here are 5 Intel recalls, and a known data corruption bug they didn't even care enough to do anything about."
Anything from the last couple of years?
Suicide my ass. You said and I quote:
"I'm glad that AMD cares enough about it's customers that they wait until a product is truely ready before releasing it"
if that were true there would be NO recalls.
Again, you can't have it both ways. AMD did recall chips and issued a rather major firmware update for CPU errata in both 2004 and 2006...
How do you explain the high power consumption, high BTU output and underperformance of the 2900 if it was really ready?
Fanboi
Ho Ho said...
How exactly would Barcelona be better on 1P desktops?
Barcelona will offer better performance, clock for clock per core. The performance gap will be much more noticeable as the core count increases and the Intel FSB bottleneck becomes apparent. The industry is quickly transitioning to multicore. 4P+ will be considered midrange by next year.
Ho Ho said...
Being as knowledgeable as you are, what are the Barcelona prices?
The prices will depend on Intel's pricing at the time and the performance advantage of Barcelona. Sharikou is more knowledgeable than myself in these matters.
Ho Ho said...
So you agree that AMD couldn't make their drivers work even though they had over half a year more time to work on them than nvidia.
Also, AMD was always saying they would deliver R600 with perfectly working DX10 drivers, something NV doesn't have. So far it seems ATI doesn't have correctly working drivers for any DX version.
We do not know how much time AMD had to work on the drivers after the final version of the R600 was completed. We also do not know what difficulties AMD may be encountering due to the unfortunate buggy state of Windows Vista. I expect all these issues to be ironed out after the release of Vista SP1, which is due in 2H07.
Ho Ho said...
"When AMD's real GPU hits the market, there will be no room for argument or doubt."
When will this happen? What kind of GPUs will they create?
As GPU design takes quite a lot of time, the next one we see should still be of ATI origin and be released during the next year or so.
You are correct. The next chip, the R700 will be primarily based on the R600, but it is much more of a collaborative effort and will be much more modular and scalable. I do not expect a huge performance boost until next year with the debut of AMD Fusion.
Ho Ho said...
The Prescott of GPUs, with nonexistent driver quality, is considered ready? Sure, for packaging maybe, but not for end-users who believed what AMD was saying just a little while ago.
How cruel of you to compare AMD standards to that of Intel!
Ho Ho said...
"I cannot say what the problem is with SLI/Crossfire. Both ATI and Nvidia's multi card offerings are unimpressive."
I agree that both have problems but from what I've seen so far CF is not working nearly as well as SLI.
The more modular architecture of the R700 should help to remedy the problems that exist with both CF and SLI.
Evil said...
if that were true there would be NO recalls.
I make no excuses for AMD in this case. As limited as the Opteron recall was, it should not have happened. But rest assured, you will not see this mistake from AMD again. Intel, on the other hand, simply does not care. This is why they push out faulty products, one after the other.
Ho Ho said...
penix
"Here are 5 Intel recalls, and a known data corruption bug they didn't even care enough to do anything about."
Anything from the last couple of years?
Here is the Core 2 Errata published by Intel themselves. Intel acknowledged that AI39 "could lead to serious data or system corruption".
Penix
I couldn't disagree with you more. Barcelona is only a failure if AMD is unable to match Intel on both performance and price-to-performance ratio. But this speculation is irrelevant since Barcelona will surely take Intel on both those points by a significant margin.
Do you have anything even remotely resembling proof of this? Links to Fudzilla or to AMD's slideshows don't count.
Um, Penis, are you saying that AMD's chips don't have errata? Even ones that can cause data loss or other serious issues?
THINK very carefully before you open your fanboi trap...
So why are you slagging Intel for the C2D errata when AMD has just as many errata in its latest-gen chips (actually, it looks like 12 more than the C2D errata)? One look at Sun's releases for Solaris running on AMD's chips shows quite a few workarounds they have had to implement to avoid AMD's errata.
http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/25759.pdf
HOLY SHIT are you one blinded fanboi.
Once again, AMD can have pages and pages of errata, some of which can cause serious issues like prohibiting the use of PCI Express slots, or just plain crashing systems... but it's not OK for Intel to have the same?
HOLY SHIT fanboi.
Buggy state of Windows Vista? Funny, my Nvidia card running with an AMD processor is working just fine right now. No issues at all. Heck, it's running an AMD-optimized DirectX 10 game BETTER than the AMD card could! Holy shit, fanboi!
"Anyways, if the 2900XT was stable and ready, why did it crash for a lot of reviewers?"
under what conditions?
"Why does it suck down so much power?""
it takes less power under load, but more when idle.
"Why does it pump out so much heat?"
one wouldn't repeat a thing twice unless he's ignorant enough to think they are different.
Ho Ho -
"How exactly would barcelona be better on 1P desktops? They also share only two memory channels for those fore cores, you know."
K8 already matches Core 2 on SPEC in a quad-core setup (slightly behind on integer and slightly ahead on fp). One quad-core Barcelona will beat two dual-core K8s.
Intel's FSB is shared not only by memory accesses but also by the coherence protocol and IO. When arbitration is needed on a shared medium, usable bandwidth becomes much less than the max. For dynamic allocation it's about 30%-70%, depending on how well the protocol runs. In short, the Intel FSB is a very poor design for multi-core.
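To put rough numbers on this, here is a small back-of-the-envelope Python sketch. The 1333MHz quad-pumped bus and 64-bit width are typical Core 2 era figures assumed purely for illustration; the 30%-70% usable-bandwidth range is the one cited above.

```python
# Back-of-the-envelope model of a shared FSB. The bus figures are
# typical Core 2 era values (illustrative assumptions); the 30%-70%
# usable-bandwidth range is the one quoted in the comment above.
fsb_mhz = 1333            # quad-pumped 333 MHz front-side bus
bus_bytes = 8             # 64-bit data path

peak_gb_s = fsb_mhz * 1e6 * bus_bytes / 1e9   # ~10.7 GB/s theoretical

for utilisation in (0.3, 0.7):
    for cores in (2, 4):
        per_core = peak_gb_s * utilisation / cores
        print(f"{cores} cores at {utilisation:.0%} utilisation: "
              f"{per_core:.1f} GB/s per core")
```

At the pessimistic end of that range, four cores are left with well under 1GB/s each, which is the whole point of the letter-in-the-mail analogy.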
penix
"Barcelona will offer better performance, clock for clock per core"
How do we know that? From those images specially drawn for Fudzilla, or from AMD touting XX% better performance on some specific (unknown) application with an unknown number of sockets and CPUs?
"The performance gap will be much more noticeable as the core count increases and the Intel FSB bottleneck becomes apparent"
Are you talking about real-world situation (the applications that majority of people are using) or benchmarks measuring some small part of a complete system, such as core-to-core bandwidth (specfp comes to my mind first)?
"The industry is quickly transitioning to multicore"
I know, Intel sells a lot of quadcore Xeons already.
"4P+ will be considered midrange by next year"
Why? When you could do just fine with 2P dual-core for the last couple of years, why should a 4P 16-core not be enough? Also, it might be surprising to you, but having load-balanced 2P boxes instead of one big 4P is cheaper and works perfectly fine for most things.
I also know from personal experience that a 1.8GHz 2P single-core Opteron box is just fine for running a CPU-intensive Oracle APEX based application that serves around 300 simultaneous users at peak times. How many companies would need 25+ times more power in their servers?
"The prices will depend on Intel's pricing at the time and the performance advantage of Barcelona"
What makes you think that AMD, with its awful business model, would make its CPUs cheaper than Intel's at the same performance level? Belief? Comparing current prices, things do not look that good for AMD.
"Sharikou is more knowledgable than myself in these matters."
Is this a joke or you honestly believe it?
"We do not know how much time AMD had to work on the drivers after the final version of the R600 was completed"
You know you can develop drivers with a prerelease version of the GPU too; you don't need the final release for that. NV released its G80 drivers over half a year ago and they were relatively nice. At that time R600 still had a few bugs, but nothing too major, and driver development was started a long time before that. Around 1.5 years ago it was thought that R600 and G80 would hit the street at about the same time. Before R600 hit, AMD was badmouthing NV over their bad Vista driver quality and promised to release R600 with perfectly working drivers. Guess who failed to deliver on its promises (again).
"We also do not know what difficulties AMD may be encountering due to the unfortunate buggy state of Windows Vista"
I don't really care much about Vista; R600 doesn't work that nicely even under the ancient XP. Also, what you said goes for NV too.
"The next chip, the R700 will be primarily based on the R600, but it is much more of a collaborative effort and will be much more modular and scalable"
Is that so because they said it would be, or because the Inquirer said so? NV also said that G80 would not be using unified shaders, and they said it about a year ago.
"I do not expect a huge performance boost until next year with the debut of AMD Fusion."
What kind of a boost do you expect when you bridge low-end GPU and CPU together and provide them with a tiny fraction of high-end GPU memory bandwidth?
"How cruel of you to compare AMD standards to that of Intel!"
Life is cruel but honest.
"The more modular architecture of the R700 should help to remedy the problems that exist with both CF and SLI."
Sorry, but dividing rasterizing over several memory banks doesn't work that easily. They could probably help things a bit by doing what 3DFX did for its multi-VPU video cards: duplicate everything for every VPU on the PCB. That meant the Voodoo 5 6000, with 4 VSA-100s, had 128MiB of total RAM on board but only 32MiB of it was really usable.
Though back then you could do alternate scanline rendering and didn't have to move data between the chips mid-frame. Today, with lots of off-screen rendering, things are considerably more complex and a lot less scalable. This is the very reason why SLI and CF scale so badly in many games; it cannot simply be fixed, because it is an algorithm (rasterizing) problem, not a HW problem.
Though Intel is doing a lot of research on ray tracing, and there you do not have such scaling problems. It has been shown that you can connect over a hundred 2P boxes over 100Mbit LAN and get >50FPS with huge models and insane detail; there is no need for superfast buses and high memory bandwidth to deliver results comparable to GPUs. You only need special hardware to do it extremely efficiently.
E.g. a 65MHz single-pipeline prototype fully programmable ray tracer chip is faster than a 2.6GHz P4 using SSE2, and it uses ~25x less floating point power and >50x less memory bandwidth than an FX5900 Ultra. With the R600 transistor budget you could make around a hundred of those pipelines, and when clocked at >50MHz you would only need around 25-50% of R600's bandwidth to deliver far superior results.
If you are more interested, then just google for RPU or ray processing unit. You can also read a PhD thesis on the subject; it has lots of information about the RT hardware.
"But rest assured, you will not see this mistake from AMD again"
The only way this could be true is if they stopped existing one day. That will not happen, but there will be recalls in the future for sure.
"Here is the Core 2 Errata published by Intel themselves"
What do those errata have to do with anything? I didn't ask about errata, I asked about recalls. Btw, have you read the errata of later K8s?
abinstein
"under what conditions?"
IIRC techreport was having severe problems with CF. I'm not sure if it was their 700W(?) PSU that just couldn't handle it or the cooling wasn't enough.
"K8 already matches Core 2 on SPEC at quad-core setup (slightly behind on integer and slightly ahead on fp)"
There is a quad-core K8? Where can I see it?
Do you know that 1P K10 has roughly half the memory bandwidth of a 2P DDR2 K8 setup? How can you compare 2P K8 to 1P K10?
Also, what kind of real-world application does that SPEC benchmark represent?
"In short, the Intel FSB is a very poor design for multi-core"
I know there are far better things than the FSB, but the fact is that it works just fine today.
DaSickNinja said...
Do you have anything even remotely resembling proof of this? Links to Fudzilla or to AMD's slideshows don't count.
Of course I can't if you say I cannot use AMD's numbers. BARCELONA ISN'T RELEASED YET, GENIUS!
Intel fanboy Evil said...
Once again, AMD can have pages and pages of errata, some of which can cause serious issues like prohibiting the use of PCI Express slots, or just plain crashing systems... but it's not OK for Intel to have the same?
Backup your claims, fanboy.
Intel fanboy Evil said...
Buggy state of Windows Vista? Funny, my Nvidia card running with an AMD processor is working just fine right now. No issues at all. Heck, it's running an AMD-optimized DirectX 10 game BETTER than the AMD card could! Holy shit, fanboi!
Yes, Windows Vista is buggy. You are a complete fool to think otherwise. TomsHardware has come up with results which are the exact opposite of yours. Their May 14th review says that the Nvidia system "rebooted for no apparent reason from the desktop twice and six times in different applications". ATI's drivers are golden compared to Nvidia's profound pile of shit.
evil -
"Hell in AMD's SLI mode running TWO of the cards still doesn't give the performance of ONE NVidia card."
You're going for the best FUDer in town, aren't you? :p Just because one lousy incompetent reviewer came up with that (most likely invalid) result doesn't mean it's true - yet for all your fanboism you repeat it again and again to ultimate shamelessness.
Crossfire vs. SLI - FEAR
Crossfire vs. SLI - Company of Heroes
Ho Ho -
"IIRC techreport was having severe problems with CF. I'm not sure if it was their 700W(?) PSU that just couldn't handle it or wasn't the cooling enough."
What do you mean "IIRC"? What do you mean "I'm not sure..."? You are plain FUDing with these words, period.
Take a look at a better-measured AMD R600 power consumption. They describe the settings; they show the pictures; you can totally repeat what they did and repeat the results. AMD 2900 XT actually consumes less power than nVidia 8800 GTS both when idle and under load.
If you worry about power, then just don't go SLI or CF. Neither saves your electricity bill nor reduces the heat from your PC.
"[TomsHardware] May 14th review says that Nvidia system "rebooted for no apparent reason from the desktop twice and six times in different applications"."
Nice to see evidence destroying shameless FUD. I actually like this part of the review best:
Other than the system rebooting for the fun of it in an application, the best was waiting for two to three minutes while 3DMark05 tried to calculate a score or the load times between tests.
I guess it's really nice to reach 10% higher fps if you'd just wait for 3 minutes, NO? :D
Ho Ho said...
"The performance gap will be much more noticeable as the core count increases and the Intel FSB bottleneck becomes apparent"
Are you talking about real-world situation (the applications that majority of people are using) or benchmarks measuring some small part of a complete system, such as core-to-core bandwidth (specfp comes to my mind first)?
Real world. As more and more software begins to utilize multicore, and the core count increases, the bottlenecks will become more apparent.
Ho Ho said...
How many companies would need 25+ times more power in their servers?
Who wouldn't? I'd take a server 10,000x more powerful if I could. As technology moves forward, prices drop and today's cutting edge becomes tomorrow's low end. Dual core/dual socket are midrange today. Quad core/quad socket will be midrange tomorrow.
Ho Ho said...
I didn't ask about errata, I asked about recalls.
Recall occurs because the public becomes aware of the problem. There was no AI39 recall because Intel was successful in covering it up.
Ho Ho -
"... or benchmarks measuring some small part of a complete system, such as core-to-core bandwidth (specfp comes to my mind first)?"
I have to say you're an ignorant fool. SPECfp does not measure core-to-core bandwidth. SPECfp consists of single-threaded programs, where SPECfp_rate is measured by running multiple independent instances of the benchmark. There is no real core-to-core data sharing.
You should take a look at the programs in the SPEC 2006 suite and tell us which one is not a real-world application. Just tell us with concrete analysis, not rough and often wrong conjectures.
You may argue that the averaging of all programs in SPEC does not reflect your computing needs, and you could prefer a weighted average (with good reasoning to back it up). However, if you don't believe in the validity of SPEC, then you could just as well trash all the xxxMark benchmarks that those enthusiast websites use, together with code snippets from games as well.
"The performance gap will be much more noticeable as the core count increases and the Intel FSB bottleneck becomes apparent"
The lack of scalability of Intel's Core 2 MCM/FSB is already apparent at quad-core.
With Core 2 Quad (and above), you only get 60% of the additional cores that you paid for...
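To spell out what that claim means in numbers, here is a small illustrative sketch; the 60% scaling factor is the commenter's figure, not a measurement.

```python
# Illustrative model of the "60% of the additional cores" claim above;
# the 0.60 scaling factor is the commenter's figure, not a measurement.
base_cores = 2          # dual-core baseline
added_cores = 2         # moving to quad-core
scaling = 0.60          # each extra core delivers 60% of a full core

effective = base_cores + added_cores * scaling   # 3.2 "effective" cores
speedup = effective / base_cores                 # 1.6x over dual-core
print(f"effective cores: {effective:.1f}, speedup: {speedup:.2f}x")
```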
Dang, I can't understand what is wrong with linking. When previewing things work perfectly but after posting everything gets screwed up. Don't blame me, blame this broken blog software.
penix
"BARCELONA ISN'T RELEASED YET, GENIUS!"
So it is. Though I still don't understand how you know what you stated before, considering that it isn't released, as you said.
"Backup your claims, fanboy."
I really suggest you learn to use Google: start from page 13.
"ATI's drivers are golden compared to Nvidia's profound pile of shit."
They are? Then why do they show such awful performance?
abinstein
"What do you mean "IIRC"? What do you mean "I'm not sure..."? "
I meant exactly that: IIRC. It seems I didn't remember correctly; it wasn't techreport who was having the trouble I described, they had a different kind of trouble: a 700W PSU was not enough for CF and they had to use a 1kW one. Unfortunately I can't remember the one I really meant right now; I've read too many reviews.
"AMD 2900 XT actually consumes less power than nVidia 8800 GTS both when idle and under load"
Have you even read the charts in your link? Isn't it interesting that the 8800GTX adds around 128W to the power usage whereas the XT adds 216W? Kind of makes me doubt their measurements.
Anyway, thanks for proving the point I made about power consumption.
penix
"Who wouldn't?"
Around 99% of all companies. You know, the part of the market where money is earned, not the part where you compare who has the biggest maximum performance achievable using the maximum amount of hardware.
"I'd take a server 10,000x more powerful if I could"
I'd take such a server for myself too, even though I wouldn't need it. Most companies are smart and do not waste money. Though of course there are exceptions to that rule.
"Recall occurs because the public becomes aware of the problem"
No, they happen because the public can't use the products correctly because of the problems.
"There was no AI39 recall because Intel was successful in covering it up."
Are you saying that there were problems but everyone was simply silenced? How many problems has that bug caused in the real world? Or was it fixed soon, so that not many buggy CPUs were delivered?
abinstein
"You should take a look at the programs in the SPEC 2006 suite and tell us which one is not real-world application"
I wasn't asking what doesn't represent a real-world application, I was asking what specific applications they represent that are run by the majority of people. Looking at SPECfp, it seems no ordinary human needs any FP calculations. Quantum mechanics, fluid dynamics and weather forecasting are not exactly the things most people do from day to day.
I do not argue that they represent what numeric computation people run; I just try to say that for the part of the market that makes up the vast majority of customers, those are not exactly day-to-day applications.
Also, I'd like it if SPEC did some real database and web server benchmarks too. What they have is nice but missing some key features. Though developing a decent framework for that would probably be way more difficult than all the other SPEC benchmarks combined.
"often wrong conjectures"
I'm not saying I'm 100% correct every time, but hearing that from you made me laugh out loud, especially considering what we have discussed before (RHT, IPC and several others). Hearing this, and that 2P K8 equals quad-core K8, just made my day.
"However, if you don't believe the validness of SPEC, then you could well trash all xxxMark that those enthusiast websites use, together with code snippets from games as well"
Well, if we don't consider xxxMark, then the other applications are real-world ones that are used daily by the bigger part of the market; they are not synthetic benchmarks as the SPEC ones are.
Btw, you forgot to comment on quad-core K8 and 2P K8 vs 1P K10; I'd be really interested in hearing about both.
Penis, learn to read, I posted the freaking link right to AMD's current errata page.
If you can't figure out how to click on a link, or don't have Acrobat installed... well I guess there really is no hope for you.
I also love how you are so willing to compare a yet-unreleased AMD chip (one that won't be out until late 2H 2007) to Intel's current generation chip. Why don't you compare it to the Intel chip due at the same time...
Oh yeah, cause you'll look like a fool.
How many reviews of the 2900 would you like me to post complaining of visual artifacts, games not running, benchmarks crashing and systems just rebooting?
I know of plenty.
Of course being the fanboi you are (you too abinstein), all you do is stick your fingers in your eyes and pretend they don't exist.
All this complaining about C2D, and yet it STILL outperforms the AMD line. In some areas by a HUGE margin...
Whine all you want, fanbois, but AMD is still second fiddle to Intel, and it's not looking like that's going to change any time soon, as Barcelona was yet again delayed.
One thing I forgot to mention:
penix
"Barcelona will offer better performance, clock for clock per core"
What will AMD do after the July 22nd price drop? After that, the X2 6000+ will be fighting against the $163 E6550. It used to compete against the E6600, a $316 chip. Does that mean AMD will have to lower its prices 2x across the board to stay competitive with Intel? As was said, K10 is pushed back and will not have significant volume for quite some time. Assuming AMD gets its financial situation more or less fixed, I expect K10 to start having any impact around H2 '08, perhaps later.
abinstein
Two things you have said stand out.
1. AMD 2900 XT actually consumes less power than nVidia 8800 GTS both when idle and under load.
While your statement is correct when looking at idle, it is not correct when looking at load.
Please note that, looking at the results for the ATI HD 2900XT, the difference between idle and load on the 680i is about 136W, while the difference between idle and load for the 8800GTS-640MB on the 680i is about 63W.
So that 8W advantage when idle is really not significant.
2. With Core 2 Quad (and above), you only get 60% of the additional cores that you paid for...
While it is understood that Intel's FSB is reaching the end of its limits, you left out price.
Going from 2 cores per socket to 4 cores per socket in a DP system, you actually pay 40% more for a gain of 60% in performance.
Currently AMD can not do that; if you want 8 cores and the performance that comes with them, you pay heavily for it, and because of this Intel is ahead of AMD on price/performance.
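For what it's worth, here is that price/performance arithmetic as a quick sketch, normalised to a dual-core baseline; the +40% price and +60% performance figures are the ones asserted above, not measurements.

```python
# Price/performance arithmetic from the comment above (the commenter's
# +40% price and +60% performance figures, normalised to dual-core).
dual_perf, dual_price = 1.00, 1.00
quad_perf, quad_price = dual_perf * 1.60, dual_price * 1.40

print(f"dual-core perf/price: {dual_perf / dual_price:.2f}")   # 1.00
print(f"quad-core perf/price: {quad_perf / quad_price:.2f}")   # ~1.14
# Despite the poor core scaling, the quad still wins on perf per dollar.
```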
While it is understood that Intel's FSB is reaching the end of its limits, you left out price.
Which is why the FSB will be phased out starting next year. It's ridiculous how AMD and Sharikou keep talking about the "FSB Bottleneck" when these processors are still fragging AMD all over the place. I think AMD talks more about the "FSB Bottleneck" and the perils of using an MCM over a "native" solution for quad core CPUs than they do about the benefits of their own CPUs!
Yes, the FSB is old, it's reaching the end of the line and is due to be replaced by CSI next year. What's wrong with that?
I remind everyone that Intel has a significant lead in all market segments: Mobile, Desktop and Servers. The only place AMD has any hope left is in MP servers, where Intel is still using Netburst.
Penix
BARCELONA ISN'T RELEASED YET, GENIUS!
Yet you have no trouble proclaiming the superiority of the K10 arch. If it isn't released yet, how the fuck do you know that it's better? We've got nothing but a bunch of questionable tests and benches to go on.
Ho Ho said...
I really suggest you learn to use Google: start from page 13.
Evil said...
Penis, learn to read, I posted the freaking link right to AMD's current erratum page.
Idiots, I know there is a link to the errata. When I presented my argument, I presented the link, the ID number of a serious issue, and Intel's acknowledgement of it as a data corruption problem. If you are unwilling to do the homework behind the argument, then you're not worth taking seriously.
Ho Ho blurted...
penix
"BARCELONA ISN'T RELEASED YET, GENIUS!"
So it is. Though I still don't understand how you know what you stated before, considering that it isn't released, as you said.
Try reading the complete conversation. He asked for numbers but said I cannot use numbers published by AMD. Where else am I going to get numbers on an UNRELEASED CHIP other than the only company that has it?
Ho Ho blurted...
"ATI's drivers are golden compared to Nvidia's profound pile of shit."
They are? Then why do they show such awful performance?
The R600 drivers may not be perfect, but at least they don't crash rampantly like the Nvidia drivers. The AMD R600 is within a few percent of the Nvidia 8800GTX, and even bests it several times. That is hardly "awful performance".
Ho Ho blurted...
penix
"Who wouldn't? I'd take a server 10,000x more powerful if I could."
Around 99% of all companies.
Most companies are smart and do not waste money.
For the love of God, Ho Ho, read the whole God damn thing before you come to such an absurd conclusion. The next sentence is "As technology moves forward, prices drop and today's cutting edge becomes tomorrow's low end."
Ho Ho said...
No, they [recalls] happen because the public can't use the products correctly because of the problems.
Remember the original Pentium FDIV bug that led to a recall? That was a less severe bug than AI39. Why was it recalled? Because the media picked it up and made it sound like a terrible problem.
Evil said...
How many reviews of the 2900 would you like me to post complaining of visual artifacts, games not running, benchmarks crashing and systems just rebooting?
The same exists for Nvidia, but from what I read, it's significantly worse. My judgments are completely without bias. Vista is known to be infested with bugs and problems. I presented my argument in a way to give the benefit of the doubt that many of the problems are due to this. This would be for both Nvidia and AMD/ATi. You, being a complete and utter Nvidia fanboy, insist on demonizing the competition and crying relentlessly.
enumae said...
So that 8W advantage when idle is really not significant.
Isn't desktop usage considered, more or less, "idle"?
The same exists for Nvidia, but from what I read, it's significantly worse. My judgments are completely without bias. Vista is known to be infested with bugs and problems. I presented my argument in a way to give the benefit of the doubt that many of the problems are due to this. This would be for both Nvidia and AMD/ATi. You, being a complete and utter Nvidia fanboy, insist on demonizing the competition and crying relentlessly.
Without bias my ass.
Your mouth is so firmly attached to Ph(ake)d's ass you could be conjoined twins.
If you took all of 5 minutes to READ (I know it's hard for you), you would have seen not one, not two, but FIVE errata that can lead to serious data loss. But nah, let's just gloss over it, as you wouldn't want to break that precious AMD fanboism.
Funny, several major reviews posted right on this very blog mentioned that the AMD drivers would mysteriously crash applications back to the desktop... but nah, let's just ignore those as well.
Holy shit! talk about double talk.
Are you sure you ain't somewhat damaged?
Poor little baby evil said...
whAAAAAAAAAAAAAAAAAAAAAAAAAAAA!!!
Incredible. You didn't even miss a single beat.
Penix
Isn't desktop usage considered, more or less, "idle"?
Idle vs load times, look at it this way...
The HD2900XT uses 73W more under load, so one hour of load wipes out the 8W idle advantage it built up over roughly nine hours of idle.
Still think that 8 Watts is enough to be a factor?
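The break-even arithmetic behind that question, sketched out using the 8W idle advantage and 73W load penalty cited in this thread:

```python
# Break-even arithmetic for the idle/load comparison above (the 8W and
# 73W deltas are the figures cited in this thread, not measurements).
idle_advantage_w = 8    # HD2900XT draws 8W less than the 8800GTS at idle
load_penalty_w = 73     # ...but 73W more under load

breakeven = load_penalty_w / idle_advantage_w
print(f"{breakeven:.1f} idle hours cancel out one hour under load")
# ~9.1: unless the card idles more than ~90% of the time, the small
# idle advantage never pays back the extra load power.
```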
enumae said...
Still think that 8 Watts is enough to be a factor?
For me personally, yes. My computer is on the desktop about 99% of the time. I am not the type of person whom they are marketing this card for.
PENIX said...
enumae said...
Still think that 8 Watts is enough to be a factor?
For me personally, yes. My computer is on the desktop about 99% of the time. I am not the type of person whom they are marketing this card for
Cool, so you only spend less than 15 minutes a day with your computer, and I believe you must have used it all up just on this blog.
You don't seem to need heavy 3D capability, so why bother buying a discrete 3D card? I guess the Intel IGP is the nicest fit for you :) It'd save you a lot of electricity, and the saving is definitely more than 8 Watts! :)
Penis,
Is all you're capable of fanboism, blind fanaticism and idiocy?
Me thinks so.
Must suck to be you... your bestest company, the one you put all the posters up in your bedroom for (you know, at mom's house; we all know you ain't allowed to live on your own), is behind Intel in the CPU world and behind Nvidia in the GPU world, and all you can do is whine on Ph(ake)d's blog.
You go boy!
Penix
I am not the type of person whom they are marketing this card for.
Penix, if what you just said is remotely true, then you are not the type of person who should be making comments relating to this video card!!!
I hope you understand that.
I do not expect a huge performance boost until next year with the debut of AMD Fusion.
Fusion is 2009 for mobile only. Unless of course you don't believe AMD's own presentations.
http://images.dailytech.com/nimage/4771_large_amd_foil.png
Little baby evil said...
NO YOU ARE!!! whAAAAAAAAAAAAAAAAA!!!
pointer said...
Cool, so you only spend less than 15 minutes a day with your computer, and I believe you must have used it all up just on this blog.
enumae said...
Penix, if what you just said is remotely true, then you are not the type of person who should be making comments relating to this video card!!!
Since my computer is always on, 1% means 7.2 hours of intensive 3D per month (which is a bit high for me most months). The rest of the time is used for normal desktop applications (idle). This is not much time, but when I do fire up Quake or Doom, I don't want to deal with low frame rates, so an integrated option is not sufficient.
I will add to the statement: I am not the type of person whom they are marketing this card for... yet. I am not a gamer, and I do not spend much time playing 3D games, but I do want to have decent 3D capability when I need it. When the price of this card comes down far enough, I will be the type of person who will purchase it.
Bubba said...
Fusion is 2009 for mobile only. Unless of course you don't believe AMD's own presentations.
I will clarify. While it's true that it will debut on mobile first, it will make it into desktop soon after. With the CPU and GPU side by side, they will not have to use the bus for communications. This will yield a huge performance boost compared to other GPUs in its class. Shortly after, AMD will use this same technology in its main GPU line.
it will make it into desktop soon after. With the CPU and GPU side by side, they will not have to use the bus for communications. This will yield a huge performance boost compared to other GPUs in its class. Shortly after, AMD will use this same technology in its main GPU line.
Link?
And don't embarrass yourself. Link to an AMD source like I did.
Oh, and do you want to clarify your date also?
Fusion is not next year.
Penix
The rest of the time is used for normal desktop applications (idle).
Penix, idle means idle... no applications are running and all windows closed.
2D applications are not considered idle.
I ask again: what kind of boost do you expect when you bridge a low-end GPU and a CPU together and provide them with a tiny fraction of high-end GPU memory bandwidth? By 2009/10 high-end GPUs will have around a terabyte per second of memory bandwidth. How do you intend to give similar bandwidth to a CPU socket without soldering the RAM directly to it, using buses several times wider than any socket has today? Perhaps by using Intel-developed silicon optics?
Bubba said...
And don't embarrass yourself. Link to an AMD source like I did.
Fusion is not next year.
This is AMD's Roadmap for new CPU architecture. AMD Fusion is the first step towards this. This roadmap is not for mobile only.
Fusion was originally expected in late 2007 or 2008. A more recent quote from Phil Hester states that "The first Fusion CPUs will be available in prototype in late 2008 and in production in early 2009." We will start seeing benchmarks in 2008.
enumae said...
2D applications are not considered idle.
MS Word and Doom 3 are very different in the load they put on the GPU. I conjecture that MS Word uses no more than 1% of the GPU. Unfortunately, there are probably no power utilization benchmarks for basic office application usage, but I theorize that the results would be very similar to those of an idle desktop.
Well first off it depends on what OS you are using...
But you being the smart guy you are, you would figure out the difference in OS's and how they interact with the GPU.
One would also hope you know how DirectX works for the desktop in the Microsoft OS's as well as some of the enhancements to some Linux desktop environments...
This is AMD's Roadmap for new CPU architecture.
Unless I've gone completely blind, neither the term "Fusion" nor "GPU" is in that slide.
Want to try again?
Bubba said...
Unless I've gone completely blind, neither the term "Fusion" nor "GPU" is in that slide.
Want to try again?
This is obviously way over your head, so I will attempt to throw a couple of hints in your direction. Look at the Fusion design, then look at the future roadmap. The similarity you are looking for is the integration of non-CPU cores. Fusion is AMD's first step in this direction.
I still want to know how they will deliver enough bandwidth to the chip. HT3 is not nearly enough; dual-channel DDR3 is even worse.
Penix
MS Word and Doom 3 are very different in the load they put on the GPU.
That's the smartest thing I have ever seen you post!
penix, that's what you get for using the inquirer for your links. It makes you look like a moron.
Like when you link to a platform slide and call it a Fusion slide.
Unless of course they plan on putting drivers on the CPU die also.
But whatever. You're not worth the effort of typing any longer.
Bubba said...
Like when you link to a platform slide and call it a Fusion slide.
In my impatience with your ignorance, I didn't even bother to look at the image. Here is the correct slide.
You are so desperately driven to see nothing that you take every opportunity you can to cover your eyes. This is not new news. Sharikou posted an article weeks ago about the planned introduction of AMD's APU design.
It doesn't matter what you want to think. The facts are exactly the same now as they were before you started spouting ignorance. We will start seeing AMD Fusion benchmarks next year, and Fusion is the first major step towards AMD's APU design.
Considering that AMD has to make huge cuts in costs, I somehow doubt we will see Fusion benchmarks any time soon. Especially considering that AMD will have to live at least a whole quarter with its most expensive desktop CPU costing considerably below $200.
Ho Ho said...
I still want to know how they will deliver enough bandwidth to the chip. HT3 is not nearly enough; dual-channel DDR3 is even worse.
You ask a difficult question. The GeForce 8800 Ultra boasts 103.7GB/s using GDDR3. I am having trouble finding GDDR3 bandwidth specs, but 22.4GB/s is the number used for both the PS3 and Xbox360, which both use GDDR3. HT3 has a bandwidth of 41.6GB/s. If 22.4GB/s is the correct number, then HT3 has higher bandwidth than what is currently used in high end video cards today.
penix, you don't seem to know much about memory architectures and the bandwidths of different systems, do you?
"The GeForce 8800 Ultra boasts 103.7GB/s using GDDR3"
The 8800 Ultra achieves it by having 12 GDDR3 chips soldered to the PCB; that means a 32-bit channel going from each of those 12 chips straight to the GPU. Each chip runs at 1.08GHz DDR, or 2.16GHz effective. The HD 2900XT has roughly the same bandwidth but uses more chips (== a wider bus) and a lower memory clock speed. The HD 2900XTX will have 16 chips at 2.2-2.5GHz, delivering 140GiB/s or more.
For the record, at the same clock speed there is exactly zero bandwidth difference between (G)DDR2/3/4. The only difference is that, using some clever tricks, the newer RAM can clock higher. There are also some differences in efficiency, mostly coming from the fact that newer standards have much higher latencies.
K8 has two 64-bit channels connected to the CPU running at most at 800MHz*, delivering up to 12.8GB/s of bandwidth, about a tenth of what the 2900XTX will have. DDR3 will have working frequencies of up to 1.6GHz effective, delivering up to 25.6GB/s over two 64-bit channels. It is possible to deliver more bandwidth by using FBDIMMs, as you can connect more channels to the CPU. IIRC Intel currently has up to 6 channels, delivering considerably more bandwidth than two ordinary channels.
*) With OC'ing that can rise.
That means today's GPUs have up to a 4x wider memory bus and RAM with roughly a 2.5x higher clock speed compared to the highest-end CPUs. CPUs have been using 2x 64-bit channels since the P4 and K8; without FBDIMM or similar technologies I can't see a way for them to increase bandwidth without motherboard prices skyrocketing. Even FBDIMM is not enough to deliver the bandwidth of today's GPUs, and it will be far worse 3-4 years into the future.
HT is not used for connecting memory pools to the CPU, and even if you did use it that way, it wouldn't have nearly enough bandwidth to satisfy the demand.
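The arithmetic behind all of these figures is simply bus width times effective transfer rate; a quick sketch using the numbers cited above:

```python
# Peak memory bandwidth: GB/s = (bus width in bits / 8) * rate in GT/s.
# The widths and clocks are the figures cited in the comment above.
def peak_gb_s(bus_bits: int, effective_gts: float) -> float:
    return bus_bits / 8 * effective_gts

print(peak_gb_s(384, 2.16))   # 8800 Ultra: 12 x 32-bit chips -> ~103.7
print(peak_gb_s(128, 0.80))   # K8: dual-channel DDR2-800     -> 12.8
print(peak_gb_s(128, 1.60))   # dual-channel DDR3-1600        -> 25.6
```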
"If 22.4GB/s is the correct number, then HT3 has higher bandwidth than what is currently used in high end video cards today"
Xenon and RSX are far from being high-end GPUs. They are at the level of the lower end of the previous generation's high end. Think of a 7900GT with half the memory bandwidth, or an X1900 with almost a third of the bandwidth and a considerably lower clock speed.
Also, as should be clear, the yet-nonexistent HT3 is far from enough even for last-generation high-end GPUs, and we are talking about several years into the future. Can you see HT speed increasing by around 50x during the next 3-4 years?
Long story short, without increasing memory bandwidth to CPU sockets by over 50x compared to today's speeds, Fusion is simply a cheaper replacement for integrated GPUs. It cannot compete with high-end dedicated GPUs.
Though there exists another option: switch from rasterizing to ray tracing. It requires considerably less memory bandwidth and can use caches extremely efficiently. Even just a few KBs are enough to cut external bandwidth ten times or more in scenes as complex as those in 3DMark06. As FP power will be considerably cheaper than memory bandwidth, this is likely the way GPUs will evolve in the future. Intel seems to have realized that, and from the little information currently known it seems Larrabee will be made mostly for ray tracing.
You have successfully proved that you know almost nothing about one of the most important parts of CPUs and arguably the most important part of GPUs; how can we know you know more about the other parts?
It is possible to deliver more bandwidth by using FBDIMMs, as you can connect more channels to the CPU. IIRC Intel currently has up to 6 channels, delivering considerably more bandwidth than two ordinary channels.
Also, with three or more DIMMs per channel, FBDIMMs can give a sustained throughput of 100%. DDR2/DDR3 can achieve a sustained throughput of 60% to 70% of the theoretical max.
Ho Ho said...
penix, you don't seem to know much about memory architectures and the bandwidths of different systems, do you?
You make it sound like you've uncovered some great conspiracy. I openly stated that I was unable to find detailed information on GDDR3. I admit when I am ill-informed on a subject, unlike someone else who refuses to even when they are clearly wrong. *cough*writecache*cough*
Your research raises some interesting questions, but it is clear that AMD has already solved all the problems that you cannot figure out. AMD would not move to this architecture unless they had. Fusion is still more than a year away, and AMD is wisely keeping quiet about it.
penix
"but it is clear that AMD has already solved all the problems that you cannot figure out"
Nice logic: AMD hasn't said a thing about it, therefore they have a solution. Well, I know that Intel with its silicon optics does have a solution, but I haven't heard anything about AMD, and I seriously doubt it has enough resources for it.
AMD itself has said that Fusion is just for the low-end. Its main purpose on laptops is to reduce power usage, and on low-end PCs to reduce costs. AMD has never said it would be a competitor for high-end discrete GPUs. For some reason you and a few others keep on claiming that it is. Why do you do it?
core2dude
"Also, with three or more DIMMs per channel, FBDIMMs can give a sustained throughput of 100%. DDR2/DDR3 can achieve a sustained throughput of 60% to 70% of the theoretical max."
Actually, with a good memory controller it can be more. AMD's DDR1 controller had very high efficiency, around 90% or so. Its current DDR2 one is considerably less efficient, at around 70-80%. Intel is limited by the FSB speed, which is currently lower than the theoretical maximum memory throughput.
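To put those efficiency figures into numbers, here is a tiny sketch (the percentages are the rough ones quoted above, not measurements; the peak figures are standard dual-channel numbers assumed for illustration):

    # Sustained bandwidth = theoretical peak x memory-controller efficiency.
    def sustained_gbs(peak_gbs: float, efficiency: float) -> float:
        """Effective throughput after controller losses."""
        return peak_gbs * efficiency

    print(sustained_gbs(6.4, 0.90))   # dual-channel DDR-400, AMD DDR1 IMC: ~5.8 GB/s
    print(sustained_gbs(12.8, 0.75))  # dual-channel DDR2-800, AMD DDR2 IMC: ~9.6 GB/s
    print(sustained_gbs(12.8, 1.00))  # same peak with FBDIMM-style 100% sustained: 12.8 GB/s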
penix
"Your research raises some interesting questions"
Questions such as?
Btw, it was no research, I know that stuff by heart.
AMD itself has said that Fusion is just for the low-end.
Correct. Fusion, and whatever you want to call Intel's comparable product based on Nehalem, are designed for mobile and the low-end spectrum of the market.
Ati will continue to make discrete GPUs, as will Nvidia. The need for discrete graphics cards for higher end performance will not go away.
Ho Ho said...
Btw, it was no research, I know that stuff by heart.
Liar. To spend so much time on the subject that you have memorized dozens of near-meaningless specifications in meticulous detail would require the complete absence of a life.
Ho Ho said...
AMD itself has said that Fusion is just for the low-end.
Perhaps low end for AMD, but will crush Nvidia's high end without question.
penix
"Liar"
What makes you think that? Even though I have a rather good memory for numbers (and a nonexistent one for people's names), these specs are really simple to remember.
If you are interested and know a good way I could prove I'm not lying, feel free to contact me by IM or email if you don't want to discuss it here; both contacts are in my profile.
"To spend so much time on the subject matter that you have memorized dozens of near meaningless specifications in meticulous detail
All I need to know is bus widths and clock speeds; everything else is trivial math done in seconds. Also, they are not meaningless to me ;)
For me it is enough to read the specs once to remember most of the important numbers; they follow a certain logic compared to one another. Most of the stuff I know and write about CPUs I don't look up on the net either, I write it from memory. Call me a weirdo, but that is how it is. I'm definitely not a liar.
Do you really think it is so difficult to remember a couple dozen numbers every six months or so? You know there are people who know pi to thousands of digits.
... would require the complete absence of a life"
I do have one, and she is the cutest thing I've seen; she is also the reason why I didn't reply to you earlier ;)
Of course, besides her I have a full-time job and a few hobbies.
"Perhaps low end for AMD, but will crush Nvidia's high end without question."
First, what makes you think that? Because they are AMD and therefore must have better performance?
Secondly, have you got any theory of what would make them so superior, even considering the considerably smaller memory bandwidth? Is it again that thing about AMD always being perfect and in the lead, no matter what logic and real-world numbers say?
Unless future GPUs are based on ray tracing, there is no way you could attach enough memory bandwidth to a CPU socket to compete against discrete solutions. Rasterizing is extremely inefficient when it comes to memory bandwidth; there is nothing that can help it.
Also, I don't think people would like to have an 80W CPU and a 160W GPU sitting in the same socket just to get performance similar to a discrete CPU and GPU. Though perhaps the HD series is just for testing how much heat can be cooled, and in the future we will have >250W Fusion chips in our CPU sockets ...
[/sarcasm]
A fun fact for today: did you know that the HD 2900XT with 105.6GiB/s of bandwidth is fighting (losing?) a hard battle against the 8800GTS with only 64GiB/s? That is even at high resolutions and high AA settings, where more bandwidth should help the most.
Penis said:
Perhaps low end for AMD, but will crush Nvidia's high end without question.
Just like the 2900 is supposedly crushing Nvidia's 8800 line???
What a fanboi.
Just like the 2900 is supposedly crushing Nvidia's 8800 line???
What are you talking about?! Sharikou provided a link clearly showing the 2900 fragging the 8800 in one game!! How can you dispute these findings from Sharikou?! Hah.
Memory bandwidth isn't everything. The 8800 GTS frags anything AMD makes easily. The 8800 GTS 320 offers by far the best performance/$. For people like me who like to play games but don't want to spend more than $300, the 320MB version is just awesome.
randy allen
"Memory bandwith isn't everything"
So it is. Memory bandwidth simply dicdates the maximum performance attainable with given GPU.
Considering the bandwidth difference between GTS, GTX and XT I'd say that XT is not limited by memory bandwidth, just the GPU is not that efficient as competitors
I'd take the 2900XT any day of the week over that unstable piece of crap 8800.
Would you like to have a new PSU to go with that XT?
"Would you like to have a new PSU to go with that XT?"
What's the matter with these AMD doomsayers? An AMD 2900XT system takes less than 10W more than an NVidia 8800GTS system. Do you need a new PSU for that 10W?
Then you'd better not access your DVD-ROM drive when the 8800GTS is under load, because if you do, your "lower-powered" system will run out of supplied power! Well, maybe that won't be an issue since the 8800 drivers are so unstable anyway...
So I wonder how the AMD fanbois explain all the crashes and games not running on the AMD 2900???
Of course, it's fine when the 2900 crashes... after all it's AMD, and they make it perfect before the customers get ahold of it.
Fanbois!
You AMD freaks are almost as bad as the Mac freaks. And that says a lot.
An AMD 2900XT system takes less than 10W more than an NVidia 8800GTS system
Fanboy. 8800 GTS 640MB is faster, cooler, cheaper. You really are desperate to link to the same anomaly over and over again!
What's the matter with these AMD doomsayers? An AMD 2900XT system takes less than 10W more than an NVidia 8800GTS system. Do you need a new PSU for that 10W?
Funny, when the Intel processor was drawing an average of 7W more than the AMD chip, it was this huge victory for AMD. People talked about the yearly power savings of the system...
Now "its just 10W"
Yes folks, once again, the wonderful fanbois prove just why they are fanbois.
Good going!
And thanks for proving the point.
No wonder that test was an anomaly. They used a hot Nvidia chipset on an extreme motherboard with everything but the kitchen sink for the 8800, and the cool Intel chipset with mainstream features for the 2900. Anybody not in love with AMD could have figured that out.
He -
"Fanboy. 8800 GTS 640MB is faster, cooler, cheaper. You really are desperate to link to the same anomaly over and over again!"
How could you call a test that is repeatable an "anomaly"? Why do you expect them to use an nVidia chipset for ATi cards? And if you think the nVidia chipset is more power hungry, you invalidate the (good) performance shown by the 8800GTX/S, too. So what do you want? An nVidia system that doesn't save much power, or one that doesn't perform much better?
Note that in most tests the performance and power differences between the 8800GTS and 2900XT are minimal. The final verdict lies on their respective prices and driver qualities.
evil -
"Funny, when the Intel processor was taking an average 7W over the AMD chip, it was this huge victory for AMD."
Mr. Doomsaying FUDer, just to like you know that a "processor" that consumes 7W more is very different from a "system" that consumes 9W (not 10W, but less than 10W, if you could read properly).
So repeatable that you can't even find 5 sites showing it in a favorable light in regard to power consumption?
Why do you expect them to use an nVidia chipset for ATi cards?
If you are saying that in ignorance, I feel sorry for you. If you are saying that in stupidity, I feel sorry for you. You can't run SLI on 975X nor can you CF on 680i.
Most of us don't run dual GPUs, so the least they could have done for single-GPU testing was to use a single chipset to isolate GPU temp. And even with cooler 975X, R600 cannot hide its ugly head in CF mode.
Note that in most tests the performance and power differences between the 8800GTS and 2900XT are minimal.
I was under the impression that it royally sucked. What are you reading?
The final verdict lies on their respective prices and driver qualities.
Only a fanboy could claim that 2900 is cheaper and has less driver issues;)
meakiki
"So I wonder how the AMD fanboi's explain all the crashes and games not running on the AMD 2900???"
Why anyone else but you are the biggest fanboi?!
Games crash on either platform, nVidia or AMD/ATi. It's unacceptable on both, and nobody on the AMD side would say otherwise; neither would I.
The facts remain: an nVidia 8800GTS system 1) has graphical performance about the same as a 2900XT, 2) consumes much more power during idle, 3) consumes slightly less under load, and 4) has comparable price to 2900XT, too.
You fanboistic FUDers just can't understand that both products are at the same level. Neither is going to "frag" the other. I am linking that perfectly valid review because, first, unlike many others it's just damn repeatable; second, it perfectly illustrates the true situation.
Now, you being a fanboi isn't that big a deal, because I see you've got (lots of) company here. But calling me a fanboi simply degrades you from fanboism to stupidity. Dude, I've got multiple nv systems but just one ati, simply because nv has much better linux drivers. But my latest addition of AMD 690G onboard video proves to be extremely capable (and cool-running) for my home theatre, with price/performance outclassing anything from nVidia that I had.
As I said, each company has its strengths and weaknesses. What disgusts me is that here I see a bunch of FUDers who, for whatever reason, literally fart from their mouths invalid claims about AMD/ATi, to the extent that they fire on people who disagree with their FUD.
Now, I actually agree that our host Sharikou often has flaming claims himself. You come to "correct" him however you like; this is the purpose of the comments and it's great. But if you try to stand on the opposite extreme to make your point, then your point is at best worthless and at worst despicable. Go open your own blog to babble about your false claims.
If I were Sharikou, I'd kill any of these extremist articles on sight; but then, this is his blog and maybe he just prefers flames over reason (from both sides), I don't know.
The facts remain: an nVidia 8800GTS system 1) has graphical performance about the same as a 2900XT, 2) consumes much more power during idle, 3) consumes slightly less under load, and 4) has comparable price to 2900XT, too.
Only a fanboy could come to such conclusions from a single review. Wrong, wrong, wrong, and wrong!
"If you are saying that in ignorance, I feel sorry for you. If you are saying that in stupidity, I feel sorry for you. You can't run SLI on 975X nor can you CF on 680i."
So you agree with my point of view, that one is supposed to run ATi cards on 975X and nVidia cards on 680i. Now who are you calling ignorant and stupid?
"Most of us don't dual GPU so the least they could have done was for single GPU testing to do it on a single chipset to isolate GPU temp."
You are testing an nVidia GPU against an ATi GPU, and you expect using an nVidia chipset can achieve fair isolation?
"And even with cooler 975X, R600 cannot hide its ugly head in CF mode."
I think you just said most of us won't run dual GPUs, didn't you? Well, if one has to have dual GPUs, and have them under load most of the time, then be sure to get nVidia SLI.
"Only a fanboy could claim that 2900 is cheaper and has less driver issues;)"
1. I didn't claim anything remotely like above.
2. Only a fanboi would claim the extreme opposite, too.
You can see power consumption differences and performance per watt comparisons here:-
http://img527.imageshack.us/img527/7237/poweruseww1.jpg
As a summary:-
-The HD2900XT is seven months late
-Uses way more power than even 8800 Ultra
-Offers performance lower than that of an 8800 GTS
-Offers inferior image quality
You can see all the proof of this at the HardOCP article:- http://enthusiast.hardocp.com/article.html?art=MTM0MSwxLCxoZW50aHVzaWFzdA==
No comparing silly 3DMark results there, just real world gameplay results. R600 is Ati's Geforce FX, that's all there is to it.
Remember, the folks at HardOCP said this:
Here is what it boils down to. If the Radeon HD 2900 XT performed exactly on par with the GeForce 8800 GTS in every game, it would still be a loser because it draws nearly 100 more watts of power, meaning it is very inefficient. The facts are though that it doesn’t even match the 8800 GTS currently. In every game it slides in underperforming compared to the GeForce 8800 GTS 640 MB, and it does it while drawing a lot more power, as much power or more as an 8800 GTX. Not only that, but a GeForce 8800 GTS 640 MB based video card can now be had for up to $70 cheaper than the Radeon HD 2900 XT. I don’t know about you, but a video card that is cheaper, runs a lot faster and draws less energy just seems like the better value to me.
At this point NVIDIA has, dare I say it, a monopoly over the high-end of computer gaming video card market. If you want the best gaming performance, it is still the GeForce 8800 GTX. The GTX has no competition.
abinstein
1) has graphical performance about the same as a 2900XT
Agreed; if you average the advantages each card has, it is about an equal 15% on their respective benchmarks.
2) consumes much more power during idle
No one should refer to 10W as "much more power".
3) consumes slightly less under load
If 10W was equal to "much more power", then 62W should be equal to "much more power" * 6, or... "I need what size power supply!!!".
All jokes aside, you can't claim 10W as huge, and then discard 62W.
4) has comparable price to 2900XT, too.
Comparable...
NewEgg's cheapest HD2900XT costs $419, while their cheapest 8800GTS-640MB is $359, and that is without the $20 rebate.
So that's about $80 for equal performance +/-, more power under load and a very slight advantage in idle conditions...
I am sorry, but that is not a good start for that graphics card, and it should not be defended as such.
NewEgg's cheapest HD2900XT costs $419, while their cheapest 8800GTS-640MB is $359, and that is without the $20 rebate.
Who cares about the price? You care? You ain't an AMD customer. Even if AMD sold the card for only $100, of course you wouldn't want to buy it, would you? To you, all AMD stuff is crap and only your beloved Intel stuff is the best in your fanboi world.
Penis said:
Who cares about the price? You care? You ain't an AMD customer. Even if AMD sold the card for only $100, of course you wouldn't want to buy it, would you? To you, all AMD stuff is crap and only your beloved Intel stuff is the best in your fanboi world.
Ha... All I need to do for you is change AMD to Intel and we have Penis personified!
Who cares about the price?
And the dumbest statement award goes to.......PEZAL!!!! Step right up and take a bow... your stupidity is unparalleled; your mom and dad must be so proud.
Even if "no one cares about price" then the 8800 Ultra would frag AMD all over still. AMD is finished.
AMD BK Q2'08.
evil_merlin without penis said..
Ha... All I need to do for you is change AMD to Intel and we have Penis personified!
Sorry to say, I'm not interested in the intel Core got 2 balls + the Intel Penisryn. This Q3, the Intel Penisryn together with its 2 balls will totally get frogged+Tortoised by AMD Barcelona.
And the dumbest statement award goes to.......PEZAL!!!!
And the Donkey title goes to you..
Pezal
Who cares about the price?
I do.
You care?
Yes!
You ain't an AMD customer.
I'm not?
That's funny, because I have been a very strong supporter of ATI products... X1800XT, X1900XT.
For the few years I have been into computers I have been using ATI.
I waited for the RD600 (motherboard), read a review... not worth it.
Then I waited for the HD2900XT, read a few reviews... not worth it.
Even if AMD sold the card for only $100, of course you wouldn't want to buy it, would you?
If the HD2900XT was $100 I would buy it, even if it was $275 I would buy it, but not $420.
To you, all AMD stuff is crap and only your beloved Intel stuff is the best in your fanboi world.
How would Intel be relevant... Intel does not have a discrete graphics solution?
------------------------------
You should really grow up and dump your blind devotion to any company... When a product has had as many delays as the HD2900XT, it should at least launch performing at a level worth its asking price.
To you, all AMD stuff is crap and only your beloved Intel stuff is the best in your fanboi world.
enumae:
How would Intel be relevant... Intel does not have a discrete graphics solution?
Should I say it like this: "To you, all AMD graphics cards and CPUs are crap and only your beloved Intel CPUs are the best in your fanboi world," so that you can understand easily and clearly??
And the Donkey title goes to you..
Hahahaha, great comeback. Must have taken you long to think of that one... considering you are the same species and all.......
Price is one thing, performance is another, but both are far less important than stability. From the published reviews of both the 2900XT and the 8800GTS, it is clear that neither is perfect, but the 2900 is far ahead of the 8800. A slight lead in performance and a lower price tag mean nothing if your computer is restarting every 10 minutes.
In the past few days there has been a flood of 2900XT reviews.
Most of them negative. I understand that a lot of people were bitterly disappointed because the 2900XT didn't prove to be the 8800 GTX killer.
Obviously the long wait made them set their expectations too high and left them very impatient. Impatient enough to draw rushed conclusions.
In my opinion the 2900XT shows good potential. And here is an apples-to-apples review from Legion Hardware.
Also, a 25-page review from Guru3D elaborating on the potential of the 2900XT (Guru3D is known for their thoroughness).
BTW enumae.
The 2900XT comes with Half-Life 2: Black Box (which will probably be replaced by the Orange Box). An HDMI dongle that lets you stream 1080p HD content and 5.1 HD audio over a single cable. A fair bit of value for the extra $80.
if your computer is restarting every 10 minutes.
And there are two things that might be causing your PC to restart every 10 minutes.
1. It could be due to a graphics driver problem, or...
2. It could be due to your intel core got 2 balls CPU overheating..
Negroponte blasts Intel over OLPC:
http://www.theregister.co.uk/2007/05/21/olpc_vs_intel/
Wise Investor
The 2900XT comes with Half-Life 2: Black Box (which will probably be replaced by the Orange Box).
I can't find that on NewEgg, please link.
An HDMI dongle that lets you stream 1080p HD content and 5.1 HD audio over a single cable. A fair bit of value for the extra $80.
I only found a DVI to HDMI adapter, no cable... the cheapest cable was another $15, and a DVI to HDMI adapter is about $10.
I will admit that if the low-end versions of the card have the same functionality (HDMI + 5.1), use less power and are quiet, the future looks very bright for AMD/ATI.
------------------------
Pezal
So that you can understand easily and clearly??
LMAO!!!
You really need to grow up!
You come across like you're a 12-year-old who does not really understand the conversation.
If you would like to have an actual debate, please comment on my other remarks; otherwise learn how to make an anti-Intel statement correctly.
By the way people, I came across this website that is showing Intel's roadmap and found that Intel will continue their FSB for a long time (2007, '08, and perhaps '09). At least this is what I have encountered here:
http://www.hothardware.com/articles/Intel_P35_Bearlake_Motherboard_And_DDR3_Memory__Asus_and_Corsair/?page=2
I just read about AMD's "next generation" mobile processor. What a joke. Based on the ancient K8 core. Expect that to be totally fragged by the Core 2 Duo of today; no need for Penryn or Nehalem to kill AMD.
Who wants to place bets on how big AMD's next quarterly loss will be?! Or how much market share they will lose this time? If AMD's market-share declines continue at the same rate they have been, they will only have 1% market share after Q4'07.
AMD BK Q2'08.
Tommy said...
Intel will continue their FSB for a long time (2007, '08, and perhaps '09)
Intel's FSB is unable to support their current generation hardware. As Abinstein pointed out, Intel's quad core CPU loses a full 40% of its performance because of this. This means that Intel's savior, the Core 2 platform, is already over the hill and on its way down.
All the Intelers were claiming that Barcelona was too late, but this is not the case. When compared to Intel, Barcelona is way too early. As soon as Barcelona is released, they will automatically hold the performance crown until 2009. By then Intel will be bankrupt.
Well, right now rumors are floating around that Barcelona is going to be released in late summer, but Barcelona is server only, so I don't think we'll see any chips for the desktop market before 2008, which means that Intel is only one year late with its own HT-like bus system (going by your numbers).
On the other hand, does Intel need that new bus system to be competitive in the 1P and even in the 2P space?
Right now it looks like they are very comfortable with their "ancient" FSB in the consumer market (the server market is another playground, but tbh, who among you owns a multi-socket system with Opteron 8xxx or Itanium?)
Hornet331 said...
does Intel need that new bus system to be competitive in the 1P and even in the 2P space?
Most definitely, yes. Dual core Opterons have proven to have comparable performance to Intel's Clovertown quad core, and in many cases even beat it by a significant margin. The performance of the Intel FSB has already leveled out. Without a new FSB, performance gains will be severely hindered. Intel will be standing still for the next two years.
HKEPC Reviews the HD2900XT
Here is the link, and it appears they have some new drivers, 8.38, instead of the 8.36 or 8.37 used in many other reviews.
Has anyone else seen this?
Likewise these drivers seem to be much better or more mature at first glance and establish a nice advantage over the 8800GTS-640MB.
Though the concern about power consumption is still valid...
But what is worrying is the R600's relatively high power consumption; many users may need to upgrade their power supplies to cope with this monster...
Penix said...
Intel's FSB is unable to support their current generation hardware. As Abinstein pointed out, Intel's quad core CPU loses a full 40% of its performance because of this. This means that Intel's savior, the Core 2 platform, is already over the hill and on its way down.
All the Intelers were claiming that Barcelona was too late, but this is not the case. When compared to Intel, Barcelona is way too early. As soon as Barcelona is released, they will automatically hold the performance crown until 2009. By then Intel will be bankrupt.
Yes, I read Abinstein's claims and I also visited the blog. I saw the benchmarks and one question arose in my mind. What happens to Intel's ancient FSB when a user utilizes every piece of hardware on a computer? For instance, copying files, playing games, encoding/decoding, listening to music, watching DVDs, ripping music, etc... Wouldn't there be a huge bottleneck there??? Of course not everybody runs so many things at once, but just a thought...
Intelers were probably claiming "late" relative to the Core 2 Duo, but we shall see a substantial difference after the release of Barcelona on the server side.
enumae said...
Likewise these drivers seem to be much better or more mature at first glance and establish a nice advantage over the 8800GTS-640MB.
As I predicted, the matured drivers have clearly put the 2900 in the lead. The 2900 dominated an unheard-of 46/51 tests. Unquestionably a landslide victory for AMD/ATi, dispelling all the FUD spread by the fanboys.
Again, why doesn't the power consumption of the 2900 vs the 8800 family matter, but it does when Intel CPUs use more than AMD CPUs?
As I predicted, the matured drivers have clearly put the 2900 in the lead. The 2900 dominated an unheard-of 46/51 tests. Unquestionably a landslide victory for AMD/ATi, dispelling all the FUD spread by the fanboys.
You mean they are finally where they wanted to start. Price-wise the HD2900XT is between an 8800GTS and an 8800GTX. With these drivers they actually justify the higher price against the 8800GTS.
Still, ATi has no card that can touch the 8800GTX, let alone the 8800 Ultra (even if that card is a rip-off right now).
And this costs reputation in the enthusiast community, which is the main market for such cards.
tommy said...
What happens to Intel's ancient FSB when a user utilizes every piece of hardware on a computer? For instance, copying files, playing games, encoding/decoding, listening to music, watching DVDs, ripping music, etc... Wouldn't there be a huge bottleneck there???
Excellent observation. I have not seen any benchmarks of real-world, non-isolated application performance. I often download and encode in the background while watching DVDs. Perhaps megatasking is much more common than originally thought.
abinstein
"What's the matter with these AMD-doomed sayers? A AMD 2900XT system takes less than 10W over a NVidia 8800GTS system"
Had you read the graph, you'd have seen that on the same motherboard the XT takes around 62W more power, not 10W as you said.
"And if you think the nVidia chipset is more power hungry, you invalidate the (good) performance shown by the 8800GTX/S, too"
Why do you think that? As the XT clearly shows, using more power to do the job doesn't mean it does the job any better.
"4) has comparable price to 2900XT, too."
Sure, you can compare the prices. The result of the comparison would be that the GTS is considerably cheaper, around an $80 difference as someone pointed out.
"You are testing an nVidia GPU against an ATi GPU, and you expect using an nVidia chipset can achieve fair isolation?"
Yes, because 975X was made in cooperation between ATI and Intel. Though in performance benchmarks there is not a big difference, for power-usage comparisons similar systems are crucial.
pezal
"if your computer is restarting every 10 minutes."
3) your PSU could be too weak to power the whole thing.
I've never seen CPU overheating cause a restart. When doing extreme OC, on the edge of the CPU not being stable, then yes, I have had a few restarts, but not because of too much heat.
penix
"Intel's FSB is unable to support their current generation hardware"
It all depends on what applications you use. E.g. compiling scales nearly linearly with added CPU power and doesn't need much memory bandwidth. The same goes for lots of other tasks. SpecFP does not show how most real-world applications scale, just some corner cases.
"As soon as Barcelona is released, they will automatically hold the performance crown until 2009"
What exactly makes you say that? Do you really believe AMD that much? Did you believe them when they promised a hard launch of all the GPUs in the HD series? What about K10 in the middle of the summer? What about perfectly working video drivers?
"By then Intel will be bankrupt."
Who will supply around 80% of the market? AMD can't do that.
penix
"Most definitely, yes. Dual core Opterons have proven to have comperable performance to Intel's Clovertown quad core, and in many cases even beat it by a significant margin"
Then why is AMD losing server market share and Intel selling its Clovertowns so well?
tommy
"What happens to Intel's ancient FSB when a user utilizes every hardware on a computer? For instance, copy files, play games, encode/decode, listen to music, watch DVD, rip music, and etc... Wouldn't there be a huge bottleneck there???"
Doing all the things you mentioned together will never bottleneck the FSB. When put together, copying files, encoding/decoding, listening to music, watching a DVD and ripping music take at most around 500-800MiB/s. A 1066MHz FSB delivers around 5-6GiB/s in the real world, up to 8.5GiB/s theoretically. (penix, I did not research those numbers, I just know them ;))
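For anyone who wants to check that arithmetic, a small sketch (the 1066MT/s bus and the workload figures are the rough numbers from above, not measurements):

    # A 64-bit FSB quad-pumped to 1066 MT/s moves 8 bytes per transfer.
    fsb_peak_gbs = 8 * 1066e6 / 1e9       # ~8.5 GB/s theoretical peak
    fsb_real_gbs = 5.5                    # ~5-6 GB/s seen in practice (rough midpoint)
    workload_gbs = 0.8                    # ~500-800 MiB/s for all those tasks combined
    print(fsb_peak_gbs, workload_gbs / fsb_real_gbs)  # workload uses well under a fifth of the bus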
The only thing I'm wondering is what will happen when Intel has its own IMC. It will give a significant boost to memory bandwidth and core-to-core communication, much lower memory latency, and no need for big caches. To me it spells a huge performance boost and either considerably cheaper CPUs (smaller dies) or more performance per die (swap cache for cores).
penix
"The 2900 dominated an unheard of 46/51 tests."
Funny that the XT didn't scale that well with increased AA and resolution. In SS2 it even dropped behind the G80 series. Isn't that a shame: it has around 60% more memory bandwidth and it still isn't enough.
Also, it seems to have massively better results in shader benchmarks but only a tiny lead in real games. I wonder why that is; perhaps an unbalanced design?
Penix said: "Perhaps megatasking is much more common that originally thought.
ROFLMFAO! Yeah, megatasking is as common as the LukeIamyourQuadfather systems sold out there. I suggest you refrain from using AMD's marketing excuse for an embarrassing platform.
Oh please, get away with the megatasking... this is so made up by AMD's marketing department...
http://www.hardocp.com/article.html?art=MTIzMyw5LCxoZW50aHVzaWFzdA==
ho ho said...
I've never seen CPU overheating cause a restart.
Overheating leads to CPU errors, which can lead to crashing and rebooting. Please do not turn this into another "write cache" argument.
ho ho said...
penix
"By then Intel will be bankrupt."
Who will supply around 80% of the market? AMD can't do that.
Bankruptcy does not always mean the immediate liquidation of the company. There is a chance Intel will still remain operative, but they will no longer be an industry dominator.
ho ho said...
Doing all the things you mentioned together will never bottleneck the FSB.
Yet Intel's inadequate FSB already causes quad core performance to fall short by a full 40% of what it should be. The numbers that you claim to "know by heart", and all your calculations are worth as much as the shit I just flushed down the toilet.
penix
"Over heating leads to cpu errors which can lead to crashing and rebooting. Please do not turn this into another "write cache" argument."
In theory, yes, it can happen in some weird situations. It should not happen when your CPU is running at its default speed. Also, please note that I said "I've never seen"; that doesn't mean I'm the whole world.
"There is a chance Intel will still remain operative, but they will no longer be an industry dominator."
I'd say that whoever supplies 80% of the market dominates it.
"Yet Intel's inadequate FSB already causes quad core performance to fall short by a full 40% of what it should be."
In what benchmarks and usage scenarios?
"The numbers that you claim to "know by heart", and all your calculations are worth as much as the shit I just flushed down the toilet. "
Prove me wrong or your talk won't look much better than how you describe mine.
It should be common knowledge that with the fastest CD drives you can get at most a few MiB/s of data, and with the fastest HDDs a few tens of MiB/s.
Yet Intel's inadequate FSB already causes quad core performance to fall short by a full 40% of what it should be. The numbers that you claim to "know by heart", and all your calculations are worth as much as the shit I just flushed down the toilet.
Ho Ho, why do you keep on debating with these morons? No matter what you say, they are busy smelling and admiring their own shit. Anyone who thinks that disk access or movie watching is going to cause an FSB bottleneck is not worth the keystrokes you are wasting responding to them.
I remember Sharikou once claiming that K8 systems are more responsive to user input because of the low latency of the memory controller!
ho ho said...
In what benchmarks and usage scenarios?
abinstein has an excellent analysis of the scaling issues with the Intel FSB. You've already read it and commented on the page, yet you conveniently forgot it existed.
So SPEC shows the most-used applications? What about analyzing SPECjbb? Better yet, what about comparing results on the same OS? Also, I don't think it is even remotely fair to compare a 2P NUMA box against a 1P Intel box. As you yourself said, there is no communication between the threads, which makes the rate scores scale linearly even on the AMD box, something you won't see in the real world with multithreaded software.
core2dude
"Ho Ho, why do you keep on debating with these morons?"
Because it is fun. When I come home from work I like to get some entertainment. This is one place I visit to get it.
Of course it would be even more fun if they would actually respond and not make up whole new problems whenever I say something that doesn't fit their view of things. I'm still waiting for penix to come up with a "benchmark" to see if I was right when I said I didn't do extensive research on the bandwidths. Another thing I'd like to know is how a bandwidth-starved Fusion could fight against discrete GPUs.
Also, penix said that those numbers I gave earlier raised some questions. I'd be quite happy if he would actually dare to ask them.
"Here is the link, and it appears they have some new drivers 8.38 instead of 8.36 or 8.37 like many other reviews."
Nice find, enumae. It shows that with a driver update the 2900XT outperforms the 8800GTS in most cases.
It seems the 2900XT does consume more power under load than the 8800GTS (but not the 8800GTX). A minimum 550W PSU is recommended by the manufacturer. Interestingly, the card manufacturer solves any potential PSU upgrade problem with a free offer of a 5.25" drive-bay PowerUp! VGA power supply. With the VGA-specific PSU, the system PSU requirement drops to 350W.
The main point here is that, except on game-only PCs, the GPU is quite idle most of the time, where the 2900XT would run cooler. In the fewer cases where full GPU power is required, the VGA-specific PSU will cover the power draw from the GPU.
Of course, if your system has a 500W or lower PSU and no free 5.25" bay, then the only high-end graphics card you can go with is the 8800GTS.
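A toy power-budget sketch shows why the VGA-specific PSU lowers the main PSU requirement (all wattages are assumptions for illustration, not measured values):

    # The main PSU must cover everything it actually feeds, plus a safety margin.
    gpu_load_w = 200          # rough full-load draw of a high-end card
    rest_of_system_w = 250    # CPU, board, drives, fans under load
    headroom = 1.2            # ~20% safety margin

    without_vga_psu = (gpu_load_w + rest_of_system_w) * headroom  # ~540W
    with_vga_psu = rest_of_system_w * headroom                    # ~300W
    print(without_vga_psu, with_vga_psu)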
True fanboism is seen in those who used to love Intel+ATi (because nVidia was so supportive of AMD), but after AMD bought ATi (something these fanbois all groan about and hate) they switched flags to Intel+nVidia overnight.
You'll know it's you when you read this. No need to raise your hand; just think for a while yourself - if this is not fanboism, what is?
Now I can understand people who bad-mouth AMD while shorting its stock. It's just (foolish) business, and unless you are someone big (which you definitely aren't) your efforts are a total waste. But I can't understand people who like or dislike a product simply because the manufacturer's mark changes from one symbol to another. These people are no different from religious extremists who care nothing for the wisdom but only for the names of their god.
Now I can understand people who bad-mouth AMD while shorting its stock. It's just (foolish) business, and unless you are someone big (which you definitely aren't) your efforts are a total waste. But I can't understand people who like or dislike a product simply because the manufacturer's mark changes from one symbol to another. These people are no different from religious extremists who care nothing for the wisdom but only for the names of their god.
... and you write these sentences on this blog... oh my....
I find it very odd that ATI would ship its chips to most of the press with a certain driver, then a few sites use the latest benchmark-optimized driver and AMD fanboys declare victory. In six months they couldn't beat the 8800GTS, and now they suddenly can in a week? Those declaring victory should wait for the dust to settle. Maybe the 2600 will be better.
enumae, here is the Black Box deal explained by Guru3D.
In case you missed it, Guru3D's 26-page review is very informative. You should check it out.
Normally a DVI to HDMI adapter supplies picture only; separate cables are needed for audio.
However, with the 2900XT, the dongle included in the box can stream image and audio signals together.
NO OTHER DVI TO HDMI switch can do this.
Who cares? It's still a slow piece of crap card. I've had my 8800 GTS for six months now, and AMD only just now releases a card that competes with it and costs over $50 more? Pathetic. Perhaps this is AMD's plan for Barcelona too! Wait six or more months, then release a CPU that competes with Intel's third fastest and claim no one is interested in faster parts!
What's with AMD first saying they were delaying R600 so they could launch a whole line of DX10 cards? Then Henri Richard said AMD doesn't do paper launches. Yet on AMD's site they list an HD2400 and HD2600, but you can't buy one until late June or early July. Pathetic. Meanwhile, Intel and Nvidia have been executing perfectly lately, releasing everything on time or ahead of schedule.
AMD = Always More Delays.
AMD BK Q2'08.
wise investor
"NO OTHER DVI TO HDMI switch can do this.
Because this is not in the standard of DVI. AMD just hacked it in.
abinstein said...
True fanboism is seen in those who used to love Intel+ATi (because nVidia was so supportive of AMD), but after AMD bought ATi (something these fanbois all groan about and hate) they switched flags to Intel+nVidia overnight.
Wow, you just failed to realize the opposite (or are blinded by AMD fanboism)... don't you see those people in the AMD camp (including you) started to talk badly about NVIDIA after the purchase? Some actually praised NVIDIA (the host) before this. You can find quite a number of such posts here (again, including yours). What a hypocrite.
Wise Investor
Thanks for the link.
NO OTHER DVI TO HDMI switch can do this.
I think the part to focus on is not the adapter, but the fact that this is a video card that has 5.1 sound running through the DVI plug.
Also, here is a link to further testing of the HD2000 series of video cards; they look small, quiet and perfect for an HTPC.
enumae
"I think the part to focus on is not the adapter, but the fact that this is a video card that has 5.1 sound running through the DVI plug."
What happens to my high-quality $200 sound card if I want to watch some HD-quality video that is output over HDMI?
Ho Ho
What happens to my high-quality $200 sound card if I want to watch some HD-quality video that is output over HDMI?
Is there anything that states you have to use the sound from the HDMI cable/ATI graphics card for your sound?
I, like you, have an expensive sound card (X-Fi) and 5.1 surround for my gaming (Z-5550), and while the new HD2900XT and future models have 5.1 sound, I do not think the quality would be as good as my X-Fi card, nor would I use the speakers that come in a monitor for my gaming or music.
I feel that these cards are mainly for HTPCs or other SFFs aimed at people who want the functionality and ability to do these things without some of the complexity of using independent hardware.
enumae
"Is there anything that states you have to use the sound from the HDMI cable/ATI graphics card for your sound?"
I might be mistaken, but I think I read somewhere that if you are watching a DRM'ed movie that must be played through HDMI, you will have to use the integrated sound chip on the GPU and cannot use your other sound card.
E.g. my friend has an AuzenTech HDA X-Plosion 7.1 DTS Connect, a sound card way better than even the X-Fi, at least for movies and games. There is no way that cheap integrated thingie can match the quality of that card.
Of course, as games do not need to use HDMI, you can use your dedicated sound card for them. Though I wonder if you have to switch cables whenever you want to use DVI instead of HDMI, and the external sound card vs the one on the GPU.
Ho Ho
I wish I knew more about this stuff, man; sorry I can't really help.