Monday, September 25, 2006

Hector Ruiz bullish on PC market

He reckons that 84% of the global population is not connected to the internet.

Kids equipped with the CM1 laptop will be able to connect to the net via satellite, for free.

BTW, AMD wrote a book for Intel, titled "Multicore processing for dummies". The intended readers are Intel's architects and other engineers.

17 Comments:

Anonymous Anonymous said...

From the article:
Intel also destroyed profit margins at computer makers with its Intel Inside campaign, which made its brand the determining factor in consumer purchasing decisions, Ruiz says. [deletia] Deprived of the ability to make products sufficiently different from those of rivals, personal computer makers battled on price, destroying their profit margins, Ruiz says.

Hmm, subsidizing marketing campaigns for the manufacturers hurt profit margins... a bit of a stretch, no? There are plenty of ways to differentiate your Intel-based box other than the CPU. Intel Inside did not require the use of Intel boards- just the CPU. Lots of other goodies in there. Of course, the bulk of the market only cares about price as a differentiator. Intel did not create that behavior- in fact, it was far better for them when CPUs captured the bulk of the BOM of a system as opposed to GPUs now. You do realize that GPUs are a system differentiator don't you? You didn't expect Intel to subsidize marketing of AMD CPUs did you?

"How could our industry possibly have reached mature growth rates when only 16 per cent of the world is connected" to the internet?

Last I checked, internet connectivity has little correlation with the TAM (total available market) for computers. There are PLENTY of computers out there not on the internet. Mature growth rates are more likely a result of saturation of 1st world markets/being at the end of a buy cycle in 1st world markets combined with overall system pricing that is still unaffordable for the bulk of the third world. Let's face it, when you are trying to decide between food, shelter, clothing, and computing... I think the computing isn't even going to be on the radar.


More in line with the alleged topic of this blog, I'm curious why there has been no discussion of the study showing 32-bit transaction serving outperforming 64-bit on SUSE 32/64 on 2-way Opterons. Was there something about this particular test that exacerbated the overhead of 64-bit computing? I know when I was a big-iron user, 64-bit was only enabled when you needed it, because otherwise the 5-10% overhead penalty just meant you ran slower. Still true? Or is this not enough of an "Intel BAD, AMD GOOD" topic to discuss here?

11:42 AM, September 25, 2006  
Anonymous Anonymous said...

"SES Global has made a $2m cash donation and promised to provide capacity on its global satellite network free of charge."

"Overall, Intel will spend $1bn over the next five years with its World Ahead Programme", which was started earlier this year and "aims to help close the digital divide between developed and developing nations".

Satellite is expensive and slow.

12:00 PM, September 25, 2006  
Anonymous Anonymous said...

"More in line with the alleged topic of this blog, I'm curious why there has been no discussion of the study showing 32-bit transaction serving outperforming 64-bit on SUSE 32/64 on 2-way Opterons. Was there something about this particular test that exacerbated the overhead of 64-bit computing?"

I saw this study somewhere else and looked at the results out of curiosity. The interesting bit was that for that particular application, if the number of users conducting transactions/min(?) was between about 200-300, 32 bit was faster by a small percentage. Once the number of transactions exceeded this by even a small amount (50, if I recall) 64 bit came out ahead and stayed ahead up to 600 transactions/min(?). I thought the results were kind of odd.

I do a lot of numerical analysis, and sometimes 64-bit will help, sometimes it is of zero benefit. You almost have to test your particular application.

Of course, in the real world most experimental data is only good to two or three decimal places, at best. So, 32-bit is more than sufficient in terms of accuracy.

1:06 PM, September 25, 2006  
Anonymous Anonymous said...

Here is the results table from www.worlds-fastest.com. The system was a 2-CPU Opteron running Apache2 and SUSE Linux 10, in either the 32- or 64-bit version:

Users  32b T/min  64b T/min  %faster
    1        515        515        0
   64       5311       5086        4
  128       6405       4932       30
  192       6173       4847       27
  256       6354       4634       37
  320       5325       4574       16
  384       5058       4393       15
  448       4169       4151        0
  512       4151       3883        7


My guess is that for small user #s, the load difference is really too small to have a measurable difference. For medium user #s, you can really see the cost of carrying 64bit overhead when you don't need it for large workloads/datasets. As user #s get larger, the 64bit overhead becomes less of a burden, and at some point beyond 512 users, I would expect that 64bit would take the lead- assuming that there is no physical speedpath limiter in the system.

Delving into the whitepaper, it would seem that they believe the convergence of performance for large loads is due to data falling out of cache and requiring disk I/O:
Note that the 32-bit operating system produced higher throughput at most user load levels. The 256 user level shows the largest percentage difference favoring the 32-bit operating system. At the 128, 192 and 256 user load levels we believe that much of the disk I/O was being serviced from the kernel’s cache. As the number of emulated users increases, larger and larger areas of the disks are being accessed and we believe the speed of the physical disk I/O becomes the limiting factor and causes the throughput numbers to converge.

This fits with my premise: the average user does not benefit from, and often pays a performance premium for, using 64 bits over 32. The real drivers for 64-bit computing remain firmly in the enterprise space (aka big iron): circuit design/simulation, tapeout, large relational databases, scientific computing/simulations, and not gaming or home video editing. Thoughts? Or is this of no interest to our host?

2:37 PM, September 25, 2006  
Anonymous Anonymous said...

Sharikou, I was wondering if you could tell us your opinion of Apple's next OS (it's natively 64-bit and would seem to be very attractive), and also your opinion of how Apple is gaining market share.

Thanks

3:38 PM, September 25, 2006  
Blogger Sharikou, Ph. D. said...

Sharikou, I was wondering if you could tell us your opinion of Apple's next OS (it's natively 64-bit and would seem to be very attractive), and also your opinion of how Apple is gaining market share.


I think Apple has great potential as a desktop OS vendor. Windows is getting more and more bloated. Windows is carried by inertia and the enormous amount of software available for that platform. However, as we move to more and more open-source software and web-based computing, the reliance on Windows will be greatly reduced. Apple needs to build more market share to establish the critical mass. If Apple can achieve 10% market share, Microsoft will be in major trouble.

3:57 PM, September 25, 2006  
Anonymous Anonymous said...

Is it true that Intel's quad core is a copy-and-paste job?

4:47 PM, September 25, 2006  
Anonymous Anonymous said...

Thanks Sharikou.

I am sure you have seen this, but if you haven't, here it is.

It may reach 10% by the end of next year.

You may not like this statement, but I have to believe that a lot of this stems from using Intel (if I understand this correctly) and the ability to use Boot Camp.

I know it wasn't Intel directly, but it was switching to an x86 processor, right?

I myself am really contemplating getting a Mac; they look great and so does Leopard.

4:59 PM, September 25, 2006  
Blogger Sharikou, Ph. D. said...

You may not like this statement, but I have to believe that a lot of this stems from using Intel (if I understand this correctly) and the ability to use Boot Camp.


Another condition for Apple to be successful is to separate software and hardware and become an OS vendor. Otherwise, the Mac will always be a niche.

6:10 PM, September 25, 2006  
Anonymous Anonymous said...

http://www.tomshardware.com/2006/09/10/four_cores_on_the_rampage/
You've seen Tom's preview
http://www.dailytech.com/article.aspx?newsid=4317
Now Anandtech's, but instead of QX6700 at 2.66, it's Q6600 at 2.4.
It consumes 44 more watts at idle without power management compared to the X6800, and 21 watts over the X6800 under load, which is in line with mainstream Clovertowns at 2.33GHz/80W TDP.
85% improvement in Studio Max, 76% in Cinebench, 60% in WME9.

"Overall Intel’s Kentsfield performs as expected. It will scale very well in multi-threaded applications such as 3D Studio Max, Cinebench and other 3D modeling applications or encoding applications. Unfortunately, unless the application is multi-core aware or optimized for multi-threading the performance gains are minimal if not absent. While the move to quad-core hardware may be exciting, software support is still trailing behind. "

7:49 PM, September 25, 2006  
Blogger 180 Sharikou said...

Another condition for Apple to be successful is to separate software and hardware and become an OS vendor. Otherwise, the Mac will always be a niche.

You have not understood Apple's strategy at all. Hardware is only a stepping stone for their end business goal, which is to be the largest retailer/distributor of digital content in the world. If you want a thorough analysis, read my blog post here:

http://sharikou180.blogspot.com/2006/09/apple-effect.html

1:58 AM, September 26, 2006  
Blogger Ho Ho said...

"Of course in the real world most experimental data is only good to two or three decimal places, at best. So, 32 bit is more than sufficient in terms of accuracy."

Most FP calculations not done in SIMD units are made at 80-bit precision; the ones in SIMD units are either 32-bit (single precision) or 64-bit (double precision). CPU general-purpose register size has zero effect on the accuracy of floating-point calculations.


"This fits with my premise- the average user does not benefit, and often pays a performance premium, for using 64bits over 32."

I've heard stories that going to 64-bit can increase memory load by up to 40%, though that is probably rather extreme. Something like 10-15% is more probable, though I haven't measured it myself. As always, it greatly depends on the application.

"Another condition for apple to be successful is to separate software and hardware and become an OS vendor."

About half a year ago I predicted it would happen by 2010. Let's see if I was right.

On the other hand, Linux is gaining momentum too. Perhaps not as fast as Apple's software, but rather fast anyway. Every week I hear of several people who have either dual-booted or entirely moved to Linux, and that is just from a couple of small forums with a few thousand users.

2:14 AM, September 26, 2006  
Anonymous Anonymous said...

Here comes that expensive AMD 4x4 SUV:

"Also, later on AMD plans to unveil Athlon 64 FX-70 (2.60GHz, 2MB L2 cache [1MB per core]), FX-72 (2.80GHz, 2MB L2 cache [1MB per core]) and FX-74 (3.00GHz, 2MB L2 cache [1MB per core]) microprocessors in 1207-pin form-factor for AMD 4x4 platform. Even though the cost of a 4x4 system will be very high, as AMD Athlon 64 FX-series processors cost around $1000 each, however, the 4x4 represents “performance at any cost” approach, which should dethrone the Intel Core 2 Duo and Extreme processors."

So we see AMD 4x4 is *exactly* what I said it was -- a poor man's 2P system designed to extract revenue from rich gamers.

4x4 even uses the same sockets as a normal 2P system.

So AMD may be getting a few developer quad-core chips ready to sell so they can vapor-launch 4x4 and quad-core in 4Q06, stalling Intel quad-core sales until AMD can ramp their volume in 2Q07.

One can expect AMD's initial 4x4 systems to be way overpriced. So the smart buyer will wait on quad-core until mid-2007 when prices have dropped and the bugs have been worked out.

6:37 AM, September 26, 2006  
Blogger S said...

Hector Ruiz seems to be your 'Guru' in creating FUD. Your styles match.

While Hector has been good in turning around AMD and bringing it on par with the leader Intel, it will be interesting to see how he will bring along AMD in its new leadership avatar. As it seems from his utterances in the past few weeks, he seems to be floundering.

All the indications from IDF are that Intel will take back the technical leadership while Hector is busy talking up AMD.

7:27 AM, September 26, 2006  
Anonymous Anonymous said...

I think Apple has great potential as a desktop OS vendor. Windows is getting more and more bloated. Windows is carried by inertia and the enormous amount of software available for that platform. However, as we move to more and more open-source software and web-based computing, the reliance on Windows will be greatly reduced. Apple needs to build more market share to establish the critical mass. If Apple can achieve 10% market share, Microsoft will be in major trouble.

That's about the only thing you've ever said that I've agreed with 100% (as I type away on my Conroe-based system).

5:01 PM, September 26, 2006  
Anonymous Anonymous said...

"So we see AMD 4x4 is *exactly* what I said it was -- a poor man's 2P system designed to extract revenue from rich gamers."

Then what you said *exactly* was wrong. AMD officially said the two CPUs plugged into a 4x4 could cost less than $1000 combined (i.e., not $2000 as you seem to imply). X-bit labs seems to have the misleading information, which just matches your guesses.

At any rate, it is too early to say the 4x4 is *exactly* like anything; wait a month or two until it comes out, at least.

10:57 AM, September 27, 2006  
Anonymous Anonymous said...

"for small user #s, the load difference is really too small to have a measurable difference."

The network delay dominates processing time.

"For medium user #s, you can really see the cost of carrying 64bit overhead when you don't need it for large workloads/datasets."

It's most likely due to cache thrashing. 64-bit apps use more memory, especially if they are not well implemented (i.e., a simple translation from 32-bit ones).

"As user #s get larger, the 64bit overhead becomes less of a burden, and at some point beyond 512 users, I would expect that 64bit would take the lead- assuming that there is no physical speedpath limiter in the system."

No. Most likely the bottleneck becomes context-switch overhead. That means the cache is now thrashed very often anyway, so the disadvantage of the (poorly implemented) 64-bit code isn't that obvious.

"Delving into the whitepaper, it would seem that they believe the convergence of performance for large loads is due to exiting cache and requiring disk I/O"

You really don't need disk I/O for simple tests like these. How many pages and bytes in total are they serving? Do they change the pages dynamically, and how? If not, Apache is smart enough to cache all the disk contents in memory.

"At the 128, 192 and 256 user load levels we believe that much of the disk I/O was being serviced from the kernel’s cache."

Even if there are 1000 users, if the site has only 10 pages, the same amount of "kernel's cache" is still required.

"This fits with my premise: the average user does not benefit from, and often pays a performance premium for, using 64 bits over 32."

No, that's because Apache WAS designed and implemented as a 32-bit application. Had you tried serving the pages with security, you'd have found that many algorithms in OpenSSL perform *much* faster in 64-bit than in 32-bit.

I am sure that as 64-bit machines become more prevalent (now that Intel has started calling AMD64 "Intel 64"), people WILL optimize applications for 64-bit, and then you will see a completely different picture.

11:21 AM, September 27, 2006  
