Wednesday, April 18, 2007

Anand learnt his lesson the tough way

Instead of being the button-pushing pumper he was during the Conroe IDF, Anand did some more testing this time and exposed Intel's sick lies. Intel claimed a 45% performance increase in gaming* with the 45nm Penryn.

However, Anand showed almost 0% performance boost in modern 3D games. The 3.2GHz 45nm Penryn scored 145.3 FPS in HL2, and the 2.93GHz Conroe did 131.3 FPS; the gain was 10%. However, that came with a 10% clockspeed increase. Clearly, Penryn shows no IPC advantage at all.
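For the record, the scaling arithmetic from the quoted figures (a quick sketch, nothing more):

```python
# Figures quoted above: HL2 frame rates and clockspeeds.
penryn_fps, conroe_fps = 145.3, 131.3
penryn_clk, conroe_clk = 3.2, 2.93

fps_gain = penryn_fps / conroe_fps - 1   # relative FPS gain
clk_gain = penryn_clk / conroe_clk - 1   # relative clockspeed gain

print(f"FPS gain:   {fps_gain:.1%}")   # roughly 10.7%
print(f"Clock gain: {clk_gain:.1%}")   # roughly 9.2%
```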

Fundamentally, Penryn is just a dumb shrink coupled with some more cache.

Intel's Q1 saw revenue below $9 billion. This was achieved while AMD was in a transition period and Intel had all the cards.

Soon, Intel will be finished off by K10.

Meanwhile, people are flocking to the 3GHz X2 6000+. And, look at this deal.

Some reader pointed out a cute observation. A 2x Barcelona ought to be enough to frag anything Intel will have in the future. Eight K10 cores have 8x the performance; nothing in Intel's future roadmap can scale that high.

* I hope Intel wasn't using Microsoft Minesweeper for the benchmark.

22 Comments:

Blogger AntiFanboy said...

"The 3.2GHz 45nm Penryn scored 145.3 FPS in HL2, and Conroe did 131.3 FPS; the gain was 10%. However, that came with a 10% clockspeed increase. Clearly, Penryn shows no IPC advantage at all."

You do realise that was at 1600x1200 right? Do you understand the concept of GPU bottleneck?

At 1024x768:
3.33GHz Yorkfield: 190.0 fps
3.2GHz Wolfdale: 170.8 fps
2.93GHz Conroe: 143.6 fps

So that is an 18% increase in framerate from a 9% increase in clockspeed for the Wolfdale.
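Checking the arithmetic on those quoted figures (a quick sketch):

```python
# Figures quoted above: HL2 at 1024x768.
wolfdale_fps, conroe_fps = 170.8, 143.6
wolfdale_clk, conroe_clk = 3.2, 2.93

fps_gain = wolfdale_fps / conroe_fps - 1   # relative FPS gain
clk_gain = wolfdale_clk / conroe_clk - 1   # relative clockspeed gain

print(f"{fps_gain:.1%} FPS gain from a {clk_gain:.1%} clock increase")
```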

Wow, funny you didn't mention THAT, huh?

I think your Ph.D may be in cherrypicking. Come on, get back up that tree Doctor, we got more work to do! ;)

9:04 PM, April 18, 2007  
Blogger AntiFanboy said...

"Meanwhile, people are flocking to the 3GHz X2 6000+."

Hahaha, is that the same X2 6000+ that uses more power than a QX6800?! And the same X2 6000+ that gets 'fragged' by Intel's mid-range E6600 chip? LOL
http://techreport.com/reviews/2007q2/core2-qx6800/index.x?pg=13
http://xbitlabs.com/articles/cpu/display/dualcore-roundup_8.html

Hey Sharikou, I was digging through your earlier articles and I found this:
http://sharikou.blogspot.com/2005/10/why-isnt-doe-imposing-power.html#links

Why isn't DOE imposing power consumption limits on CPUs?

Why? Good question!

Talk about your past coming back to haunt you! With the X2 6000+ (and god forbid) QuadFX burning up the watts, you mustn't be able to sleep at night with such excessive power wasting going on!

Oh, good for you Sharikou, I didn't know you cared so much for the environment. Now, if you can get into Hector's pants and tell him to cut the ridiculous X2 6000+ and QuadFX TDPs in half, that would be great! The world will thank you for it, no more blackouts!

Hahaha.

See ya Doc.

9:12 PM, April 18, 2007  
Blogger Intel Fanboi said...

I propose a new topic for this thread. What does everyone think is Sha-Ri-Kou's most memorable quote? Roborat had some good ones today: "recoiling to make a huge comeback" and "pulling back to make a punch". Post your favorites!

9:15 PM, April 18, 2007  
Blogger Dr. Yield, PhD, MBA said...

That would be 8 of the past 16 posts with direct links/references to NewEgg.

Welcome to the Journal of Pervasive NewEgg Blue Light Specials, hosted by Sharikou!

9:16 PM, April 18, 2007  
Blogger Sharikou, Ph. D said...

You do realise that was at 1600x1200 right? Do you understand the concept of GPU bottleneck?

Linearly scaling with clockspeed, GPU bottleneck was not reached.

9:18 PM, April 18, 2007  
Blogger AntiFanboy said...

"Linearly scaling with clockspeed, GPU bottleneck was not reached."

How do you explain the 1024x768 numbers then? 18% fps gain from 9% clockspeed increase? Hmm?

Oh, wait, you can't explain it, so you'll just ignore it altogether.

Sorry Doc, nice try, next!

9:21 PM, April 18, 2007  
Blogger lex said...

Sharikou,

DUDE, even the "Ph"ony "D"octorate is no longer worthy of you.

Can't wait to see them AMD numbers. Probably just a ploy, eh? Suck big losses on a huge drop in revenue and market share to trick Intel into continuing to make crappy products like C2, Penryn and Silverthorne. Damn, INTEL will probably just cancel Nehalem and put the team back to work on Tejas. Then AMD will surprise them with that Barcelona.

Too bad Barcelona is just a rebadged K9, better known as a DOG.

Woof woof Phony

9:23 PM, April 18, 2007  
Blogger Intel Fanboi said...

Here is my favorite of all time:

"INTEL is 5 generations behind AMD" posted on February 01, 2006.

In summary it was a list of five technologies where Intel was possibly one generation behind AMD, not one overall technology where they were five generations behind.

9:24 PM, April 18, 2007  
Blogger Sharikou, Ph. D said...

How do you explain the 1024x768 numbers then? 18% fps gain from 9% clockspeed increase?

The lower resolution result was purely a cache effect. But no gamer would play with that kind of low resolution.

9:24 PM, April 18, 2007  
Blogger Sharikou, Ph. D said...

"INTEL is 5 generations behind AMD" posted on February 01, 2006.

It's still largely true. Once K10 is out, you will see how big an architectural advantage AMD has.

Intel bought some time with large cache with the modified Pentium III core. However, as we get deeper into the multi-core era, AMD's scalable architecture will excel.

9:26 PM, April 18, 2007  
Blogger AntiFanboy said...

"The lower resolution result was purely a cache effect. But no gamer would play with that kind of low resolution."

Rubbish. The load on the CPU is identical at 1024x768 and 1600x1200. The only difference is the added load on the GPU.

You clearly don't understand the concept of GPU bottlenecking, or how games are rendered for that matter.
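The bottleneck concept in a nutshell (a toy model with made-up numbers, not taken from any benchmark): every frame needs both CPU and GPU work, so the delivered frame rate is capped by whichever side is slower.

```python
def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    """Toy model: the slower of the CPU-limited and GPU-limited
    rates caps the frame rate actually delivered."""
    return min(cpu_fps, gpu_fps)

# Made-up numbers: a faster CPU raises the CPU-limited rate;
# raising the resolution lowers the GPU-limited rate.
slow_cpu, fast_cpu = 160.0, 190.0          # hypothetical CPU-limited fps
gpu_low_res, gpu_high_res = 400.0, 150.0   # hypothetical GPU-limited fps

# Low resolution: CPU-bound, so the faster CPU shows its full gain.
print(delivered_fps(slow_cpu, gpu_low_res))   # 160.0
print(delivered_fps(fast_cpu, gpu_low_res))   # 190.0

# High resolution: GPU-bound, so both CPUs deliver the same rate.
print(delivered_fps(slow_cpu, gpu_high_res))  # 150.0
print(delivered_fps(fast_cpu, gpu_high_res))  # 150.0
```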

You're fighting out of your league Doc, go back to explaining how Intel will BK in 2Q08 instead, this should be fun.

9:30 PM, April 18, 2007  
Blogger AntiFanboy said...

http://www.firingsquad.com/hardware/amd_athlon_64_x2_6000/page5.asp

Hey look Doc, the X2 6000+ is only 10% faster than the X2 3800+ in Quake 4 at 1600x1200?!?! What's going on?!

Of course, at 1024x768 the X2 6000+ shows a much more impressive 33% gain, but that is a 'cache effect' and nobody games at that resolution anyway, right?

A 10% gain from a 50% clockspeed increase?! SURELY NOT? OMGWTF?! WHAT IS AMD DOING?!?!

What did I tell you about your past haunting you Doc. Be careful what you say, it can easily turn on you. :P

Stop 'fragging' yourself in the foot, it's embarrassing. ;)

9:38 PM, April 18, 2007  
Blogger Giant said...

Gotta love those AMD fanboys. Since C2D wins all the benchmarks they claim that C2D was just built for benchmarks, and that Intel is bribing all the tech sites!

11:27 PM, April 18, 2007  
Blogger ThomasHOL said...

“Intel's Q1 saw revenue below $9 billion. This was achieved when AMD is in a transition period, and Intel had all the cards.”

Wow, great way to cover the results without covering them. You completely forgot to mention that they made $1.6 billion in net income and gained market share according to iSuppli. Besides this, they reduced their inventory of old chips.

Also, please don't think the numbers from one game at high resolution are anything to go by. I can easily find 20+ games that would show no performance gain because they are limited by the GPU. If you must take the CPU into consideration in gaming, you must compare the lower-resolution results so you don't run into too much GPU limitation. Even at 1024x768, I'm sure there are games that are GPU limited if you turn on AA and AF.

1:34 AM, April 19, 2007  
Blogger Evil said...

Once again, I will ask...

Why is it perfectly fine for AMD to lower prices, but when Intel does so, the Ph(ake)d fanboi is all doomsaying?

It's QUITE clear you have no clue with your "Linearly scaling with clockspeed, GPU bottleneck was not reached" quote. It's quite simply bull, and you know it is.

You claim no gamer would be playing at 1024x768, but looking at Valve's records, it is in fact the most popular resolution. http://www.steampowered.com/status/survey.html

You sir, are nothing more than a liar.

7:54 AM, April 19, 2007  
Blogger Evil said...

Sharikou, Ph. D said...
It's still largely true. Once K10 is out, you will see how big an architectural advantage AMD has.

Intel bought some time with large cache with the modified Pentium III core. However, as we get deeper into the multi-core era, AMD's scalable architecture will excel.


Do you REALLY think Intel is standing still and not working on new technologies?

Are you really that damaged in the soft melon on your neck?

You hand pick what you want and ignore the rest of the facts to distort the truth in any way you see fit.

Again, you are nothing more than a charlatan.

7:57 AM, April 19, 2007  
Blogger Giant said...

"Some reader pointed out a cute observation. A 2x Barcelona ought to be enough to frag anything Intel will have in the future. Eight K10 cores have 8x the performance; nothing in Intel's future roadmap can scale that high."

Err, oh no, of course not. Pat Gelsinger never said that Nehalem would scale to eight cores with CSI.

Sharikou also continues to suggest that Intel would try and load eight cores via an FSB. That is stupid. They would never even bother trying, with CSI coming next year in Tukwila and Nehalem.

11:07 AM, April 19, 2007  
Blogger ElMoIsEviL said...

"The lower resolution result was purely a cache effect. But no gamer would play with that kind of low resolution."

Cache effect? OMG don't tell me you're one of those idiots who thinks that games fit entirely in the CPU's cache?!

It was GPU bottlenecking! Pure and simple.

1:52 PM, April 19, 2007  
Blogger ElMoIsEviL said...

"Intel bought some time with large cache with the modified Pentium III core. However, as we get deeper into the mult-core era, AMD's scalable architecture will excel."

Not true. You see, Intel's Core 2 architecture sees no problems when operating in a single-socket platform. It's not the cache size that propels Intel's Core 2 to dominance, it's the ability to send data through the cache rather than through the FSB. So the size does play a part, but it is not the main reason for the large performance increase. Intel's Core 2 basically uses the cache for coherency, as a communications buffer, and this buffer has lower latency and higher bandwidth than any current HyperTransport/FSB implementation.

1:56 PM, April 19, 2007  
Blogger ElMoIsEviL said...

Sharikou claims to hold a PhD but fails to even realise the basics of why Core 2 is able to perform the way it does.

A larger cache is needed to keep the coherency stable on the Penryn quad-cores for a VERY simple reason: there are 4 cores rather than the 2 present on the Core 2 Duo. Penryn also comes with ALU tweaks, a brand-new process (45nm) and new instructions (SSE4). All in all, it's no slouch and should hold its own against the K10 on a performance-per-clock basis (losing, but not by much) while beating K10 on total performance due to the higher clocks it will be able to achieve.

1:59 PM, April 19, 2007  
Blogger ElMoIsEviL said...

"Sharikou also continues to suggest that Intel would try and load eight cores via a FSB. That is stupid. They would never even bother trying with CSI coming next year with Tukwila and Nehalem."

What Sharikou and others who post anything to place Intel in a bad light fail to realise is that Intel changed gears with the coming to power of Paul Otellini. Intel went from a primarily marketing-driven company to a performance-oriented company (because Paul was an engineer and not a businessman). As such, Intel is being quite aggressive, and of course a company that size with that many resources has given Paul the ability to mold it into a true powerhouse. Intel won't make as many raw dollars (no, the marketing team was cut in half) but will undoubtedly remain competitive with AMD for some time to come.

2:03 PM, April 19, 2007  
Blogger Tanrack said...

Pointing to the cache and calling the C2D a one-trick pony is just daft. Celerons have always had less cache than the mainstream part, and they have always lagged in performance compared to the mainstream part. It was so in the P3 days, it was so in the P4 days where they sucked badly, and it is so now. Intel uses cache to keep the CPU busy; if it works, it works. Why should the man on the street care?

1:30 AM, April 20, 2007  
