SUN Galaxy x4600 smashes Superdome
Some say SUN's servers are too expensive. Let's check SUN's pricing on the 8P 16-way 4U rackmount x4600 server, which outperforms HP's 16P Superdome with Itanium 2 1.5GHz by up to 20%.
For $67,495.00, you get 8 Opteron 885 2.6GHz dual-core CPUs, 32GB of memory, two HDDs, and four GbE ports. For $35K, you get 4 processors and 16GB of RAM. In case you don't know, the 16P Superdome costs over $1 million.
Go to dell.com and try to configure a 2P Woodcrest server with the same amount of memory, and your total runs up to $40,000.
Why are people paying $1 million for one 16P Superdome, an amount of money that can buy 25 fully loaded 2P Woodcrest machines? Are they stupid? No, they pay 25x because the 16P Superdome is 25 times more productive than a 2P Woodcrest. One wolf is more powerful than 25 sheep combined -- because sheep don't have the technology to integrate the power of 25 sheep into one wolf.
But the x4600 is 20% faster than the 16P Superdome, and costs only slightly more than a 2P Woodcrest. Folks, this is something fundamental.
The SUN x4500 is also reasonably affordable for most people. You get 2x Opteron 285, 16GB of RAM, and 24TB of storage for less than $70K. If you buy 10 of these, the price drops to $47K per box. The x4500 with Solaris 10 ZFS can sustain a 2GB/s read data rate, which means the server can examine every byte of the 24TB in just over three hours. Suppose your average file length is 1GB (such as those huge Yahoo mail mbox files) and your application needs to search within the file; the x4500 can do it in about 0.25 seconds on average.
I think that the x4500 is perfectly suited for the online video industry: one such box can store 20,000 DVD-quality movies (compressed), or 40,000 VHS-quality movies. There are zillions of web sites streaming video, and each of them needs at least one such box. Maybe SUN can bundle some video streaming software with the box and make it a turnkey solution.
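Here is the back-of-the-envelope arithmetic behind those numbers (the 2GB/s rate and 24TB capacity are the figures quoted above; the roughly 1.2GB per compressed DVD-quality movie is my own working assumption):

# Quick check of the x4500 numbers above. 24TB of storage, 2GB/s sustained
# reads under ZFS, ~1GB mbox files, ~1.2GB per compressed DVD-quality movie.
TB = 1000  # GB per TB, counting the way disk vendors do

capacity_gb = 24 * TB
read_rate_gbs = 2.0

full_scan_hours = capacity_gb / read_rate_gbs / 3600
print(f"Time to read every byte: {full_scan_hours:.1f} hours")     # ~3.3 hours

mbox_gb = 1.0                                   # one big Yahoo-style mbox file
avg_search_s = (mbox_gb / read_rate_gbs) / 2    # on average you scan half the file
print(f"Average in-file search: {avg_search_s:.2f} s")             # 0.25 s

movie_gb = 1.2                                  # compressed DVD-quality movie (assumed)
print(f"Movies per box: {capacity_gb / movie_gb:,.0f}")            # ~20,000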
65 Comments:
Sharikou, master of illusions.
A PowerEdge 1950 at dell.com with two 3GHz Woodcrests and 16GB (8x2GB) of memory costs $9,700. The price jumps by $26,500 if you configure 32GB, because that requires non-commodity 4GB DIMMs. Sun's pricing is all based on 2GB DIMMs.
BTW, can you tell us how many 2P servers need 32GB?
The way I look at it, for $40k, I can buy four PowerEdge 1950s with a total of 8P and 64GB of memory and win big on performance.
Dude, four 2P low-end servers are four 2P low-end servers. They don't add up to 8P. An 8P server is more powerful than sixteen 2P servers combined.
"A 8P server is more powerful than sixteen 2P servers combined."
Dude, you are real funny...
Actually, an 8P Opteron server outperforms thirty-two 2P Woodcrest servers -- unless Intel can rewrite the OS and other software and link the 2Ps together.
Today, clustering technology is still very primitive. You can try clustering 32 Woodcrests together over GbE, and I can safely bet it will be slower than an 8P Opteron in many, many applications. Well, I forgot to mention that Woodcrest's limited bandwidth will not scale...
"Today, clustering technology is still very primitive"
Really? Then explain how this cluster of sixteen 4-ways achieved 1.2 million TPM-C almost three years ago.
What happened to my list of companies selling Conroes?
Please stay on topic; we are talking about servers in this article.
A fair 32GB pricing comparison would be a Dell Woodcrest server versus the Sun x4100 or x4200. Oops, can't do it, the Sun servers only go up to 2GB DIMMs, so they can only hold 16GB. The Sun servers are stuck with dead-end memory.
"A 8P server is more powerful than sixteen 2P servers combined."
Data, please? Maybe what you meant is an 8P server uses more electrical power than 16 2P servers combined.
Really? Then explain how this cluster of sixteen 4-ways achieved 1.2 million TPM-C almost three years ago.
That was because Oracle supports clustering (and you end up paying 16x Oracle license fees). Try something else, such as MySQL or Apache.
In general, it's much better to have one large server than a bunch of small servers, as long as the price is not too steep. What I see today is that Opteron has made 4P (8-way) and 8P (16-way) servers very affordable, such that it is no longer economical to buy small 2P servers.
AMD is a platform company and Intel is a CPU company with onboard wireless. An unfortunate happenstance.
Regardless of the price now, the fact that these are Socket 1207 Opteron64-compatible automatically puts them far ahead of Woodcrest. I doubt Woodcrest can throw in Clovertown on the same chipset, and even if it can, you've got 8 cores fighting over 2 FSBs and you also have cache coherency happening for the 8 cores over the 2 FSBs, so no matter what you do, you simply cannot make a Woodcrest system perform like a Sun x4600.
Hey, I love Sun technology. Most of the time. It's got some of that gee-whiz 'pure science' appeal. But mostly, it is designed to cost way too frakkin much and is way too frakkin complex and expensive to manage.
Before I begin on Sun's stupidity, let me just put it out there... I know their channel-centric business model is a giant ball and chain for them. But it has been their choice to keep this model. They had a chance to ditch it after the dotcom bust, but decided to keep it. This flawed decision is a big reason why Sun is not doing nearly as well as they could have been doing.
This channel model results in convoluted discounting schemes, lags in response to the market, higher prices on the average, and much lower volumes. Not to mention almost total alienation of new customers. It is a killer. And Sun is just plain ol' retarded for keeping it.
Another thing to get out of the way: yes, Sun's prices may compare reasonably to other rapacious companies like HP. Most HP computers cost twice as much as an equivalent Dell. And Itanic is from another planet. It shouldn't even be brought up. If you understand elasticity of demand, you will see why Itanic costs so much.... there is no demand. Although I will admit Dell's latest Full-Bendover DIMM (FB-DIMM) compatible machines are god-awful expen$ive.
But back to Sun:
What Sun has no clue about is VALUE.
For example, I will not pay Sun thousands of dollars for a commodity 250GB/500GB disk drive. Jonathan can go frak himself if he thinks anyone, enterprise or ISV, is happy paying Sun 10X-20X the price of a frakkin disk drive.
The 'I' in RAID is for INEXPENSIVE. But Sun just does not get this.
The new "Thumper" is something that should have been left out in the weeds with the rabbit food.
Who wants a machine that has 48 giant heat generators in such a small space? It must be designed this way just because Sun wants to do their own version of Full Bendover with the customer being forced to buy more Sun disk drive replacements.
Even better. Who in their right mind will buy a frakkin storage server without drives that hot-swap without pulling the entire machine out??? I mean, that is so frakkin stupid.
I look at the machines and I say:
x4500 "thumper" -- interesting logical design, dumb physical design, WAY OVERPRICED.
x4600 "big head" -- a nice capable server, just WAY OVERPRICED.
x8000 "blademan" -- a very good design, WAY OVERPRICED.
x2100 "poorboy" -- WAY TOO CRIPPLED and WAY OVERPRICED.
x4100 "fatboyslim" -- WAY TOO OVERPRICED for what you get.
x4200 "rounder" -- the most solid Sun server design, but WAY TOO OVERPRICED.
What the world is looking for is x86 *commodity pricing*. Not frakking x86 *SPARC-style pricing*. Sun's pricing model should be way lower.
To become a real company of the future, and GROW, Sun needs to:
1. Fire a lot of people and reduce their burn rate.
2. Design systems for lower cost.
3. Reduce prices.
4. Do something about those "big-80's" style warranties. I'd rather have an entire spare machine vs. relying on a warranty. And with Sun's prices, it is easy to buy a spare. This means there is a problem with the pricing.
One of the truths about technology is that when you have good technology and you overprice it, you are only compromising your future vs. building a future for yourself.
Sun has great technology and is killing themselves by pricing themselves out of the market. They are pouring all their opportunities down the drain.
The new machines from Sun are interesting 'pure science' tech that is neato for the lab but incredibly disappointing in the real world.
"That was because Oracle supports clustering"
So, what's your point? How does this address your comment that "unless Intel can rewrite the OS and other software..."?
"Try something else, such as MySQL"
Are you saying that MySQL doesn't support clustering?
As I understand it, MySQL clustering only works on NDB, a memory-based database engine, which is limited in size and expensive.
Generally, to build a cluster that can outperform a bigger SMP box, you end up spending more money (hardware, software licenses) and more labor (software config, sysadmin, etc.).
x4500 "thumper" -- interesting logical design, dumb physical design, WAY OVERPRICED.
x4600 "big head" -- a nice capable server, just WAY OVERPRICED.
Try building one yourself and see how much it costs.
"A 8P server is more powerful than sixteen 2P servers combined."
Data, please? Maybe what you meant is an 8P server uses more electrical power than 16 2P servers combined.
You cannot compare an 8P server with a combination of 16 2P servers. The latency between the 16 2P servers is all that is needed to kill performance, even with 10GbE connections; and second, the software has to be drastically rewritten, if that is at all possible. Only certain applications will work on a cluster of 16 2P servers, and they are designed that way too.
"Suppose your average file length is 1GB (such as those huge Yahoo mail mbox files), and your application needs to search within the file, the x4600 can do it in 0.25 second on average."
Sharikou, unless you have insider information, I do not believe that Yahoo uses the dumb mbox format. Yahoo runs on qmail and most likely they use the maildir format to store emails. Please find some other example.
Try building one yourself and see how much it costs.
Okay, I knew you would eat the rat poison.
Let's see how much 48 250GB drives cost. About $4300. A big customer such as Sun would pay less, probably around $3000-$3500.
And a 2P Opteron system with software RAID. Maybe $5K.
Fancy case $500.
So that big server should run < $10K.
But Sun's price is $33K.
That is frakkin $23K+ margin on a <$10K box. Without support and all sorts of other pure cocaine profit stuff Sun is going to tack on to any deal.
And you know what... if I add a 24-port HARDWARE RAID card to that box I built, it runs me $1100.
So I am still $22K ahead of Sun and I have hardware RAID.
And guess what, Sun-lover-ikou?
That hardware RAID supports heavy duty data protection like RAID6.
What does your steaming crap pile of ZFS software RAID support?
"RAID 0, 1, 0+1, 5 enabled by RAID-Z"
Right from the x4500 spec page. That's it. Basic entry level RAID available on a cheap Windoze PC. But Sun will make you pay through the nose for it.
If Sun wants to be innovative, then they have to do the frakkin work to make storage cheap for people, not put 48 drives in a super heavy box.
And decide that they can offer real hardware RAID. For $1100/24 ports, there is just no reason to be running software RAID.
Sun is retarded. The market was not crying out for a bunch of overheating overvibrating drives crammed into a *170 LB* drawer that doesn't hot swap from the front.
I like Sun, I really do. But they *do not* get the real world.
You cannot compare an 8P server with a combination of 16 2P servers. The latency between the 16 2P servers is all that is needed to kill performance, even with 10GbE connections; and second, the software has to be drastically rewritten, if that is at all possible. Only certain applications will work on a cluster of 16 2P servers, and they are designed that way too.
Most *enterprises* will get a lot more VALUE out of 16 2P servers. At the very least that is 8 2-box clusters. And that is very useful for database, web, app server, etc.
One box is still one box. A single point of failure.
Yahoo runs on qmail and most likely they use the maildir format to store emails. Please find some other example.
I have a Yahoo mail box with thousands of email messages, and I noticed that when I search for something, the whole server seems to freeze. So I asked a friend who used to work at Yahoo. He told me Yahoo's mail search was just a Perl script doing a regex match over the whole mbox file. I believe Yahoo didn't use Maildir because it would run out of inodes on the filesystem.
Most *enterprises* will get a lot more VALUE out of 16 2P servers. At the very least that is 8 2-box clusters. And that is very useful for database, web, app server, etc.
I already pointed out that to cluster many small boxes, you end up spending more. For instance, there is no cheap way to build a clustered MySQL database. Even for tasks such as mass virtual hosting, one reliable SMP enterprise server is definitely better than a bunch of small boxes. When you have 16 cores in a server, you have a lot of room to handle burst traffic. This is like one T1 line serving 100 people at very good speed, while 20 dial-up connections are painful for 20 people.
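To put some rough numbers behind that one-big-pipe argument, here is a toy M/M/c queueing calculation. It is purely illustrative -- the request and service rates are made up, not measured -- but it shows why one pooled 16-core box rides out bursts better than 16 separate small boxes at the same overall utilization:

import math

def mmc_response_time(arrivals, per_core_rate, cores):
    # Mean response time of an M/M/c queue, via the Erlang C formula.
    a = arrivals / per_core_rate                    # offered load
    rho = a / cores                                 # utilization
    if rho >= 1:
        return float("inf")
    waiting = (a ** cores / math.factorial(cores)) / (1 - rho)
    p_queue = waiting / (sum(a ** k / math.factorial(k) for k in range(cores)) + waiting)
    return p_queue / (cores * per_core_rate - arrivals) + 1 / per_core_rate

total_arrivals = 12.0   # requests/s hitting the whole service (made up)
core_rate = 1.0         # requests/s a single core can serve (made up)

# 16 small boxes: each is its own 1-core queue handling 1/16 of the traffic
small = mmc_response_time(total_arrivals / 16, core_rate, 1)
# one 16-core SMP box: all the traffic shares one 16-core queue
big = mmc_response_time(total_arrivals, core_rate, 16)

print(f"16 small boxes:  {small:.2f} s per request")   # ~4.0 s
print(f"one 16-core box: {big:.2f} s per request")     # ~1.05 s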
I have a Yahoo mail box with thousands of email messages, and I noticed that when I search for something, the whole server seems to freeze. So I asked a friend who used to work at Yahoo. He told me Yahoo's mail search was just a Perl script doing a regex match over the whole mbox file. I believe Yahoo didn't use Maildir because it would run out of inodes on the filesystem.
Whether you run out of inodes depends on the filesystem. Amazing that Yahoo! can manage with the mbox format... thanks.
These are low-end commodity software servers you're pimping out here. It's humorous that you think someone needs an 8P machine to handle RAID5.
" I doubt Woodcrest can throw in Clovertown on the same chipset, and even if it can, you got 8 cores fighting over 2 FSB's and you also have cache coherency happening for the 8 cores over the 2 FSB's, so no matter what you do, you simply cannot make a Woodcrest system perform like a Sun x4600."
Intel has said that Clovertown is drop-in compatible with Woodcrest, and people have already dropped in Kentsfields to replace Conroes, so it shouldn't be a problem.
It may be a technicality, but with the shared cache you're talking about 4 caches on 2 FSBs and only 4 caches to be kept coherent instead of 8.
"As I understand, MySQL clustering only works on NDB, a memory based database, which is small and expensive.
Generally, to build a cluster that can outperform a higher SMP, you end up spending more money(hardware, software licenses) and more labor (software config, sys admin, etc)"
You know, a database that is too big to fit in memory needs a faster disk system. And multiple machines will give you more spindles more easily than building some giant behemoth of a single system.
Load balancing a large database across a cluster... is what every major large SQL vendor does for performance.
That is also the premise -- with proven results -- of Oracle's grid-based database architecture.
An array of cheaper computers will trump one big box.
For MySQL < 5.1, you can use in-memory tables and clustering, and you get redundancy. No single point of failure.
MySQL >= 5.1 supports disk-based tables for clustering:
"In MySQL 5.1, the memory-only requirement of MySQL Cluster is removed and operational data may now be accessed both on disk and memory. A DBA can specify that table data can reside on disk, in memory, or a combination of main memory and disk (although a single table can only be assigned to either disk or main memory). Disk-based support includes new storage structures - tablespaces – that are used to logically house table data on disk. In addition, new memory caches are in place to manage the transfer of data stored in tablespaces to memory for fast access to repeatedly referenced information."
Last, if you need help setting up a MySQL cluster, I suggest you follow this simple guide.
Clustering, especially with MySQL 5.1 or newer, is simple and powerful. It gives most enterprises much more value than one big server.
And remember, Sharikou, with external HT connectors, multi-machine clustering with HT will be available soon. This will give you blinding speed and the reliability/scalability advantages of multi-machine.
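As a footnote to the MySQL 5.1 disk-data passage quoted above, here is roughly what it looks like from the SQL side. This is only a sketch: it assumes an NDB cluster that is already up and running (the config.ini with the management and data nodes is not shown), the host name, credentials, file sizes and table layout are all invented, and the MySQLdb driver is just one convenient way to send the statements.

# Sketch of MySQL Cluster 5.1 "disk data" tables, per the passage quoted above.
# Assumes a running NDB cluster; names, sizes and credentials are invented.
import MySQLdb

conn = MySQLdb.connect(host="sqlnode1", user="admin", passwd="secret", db="test")
cur = conn.cursor()

# Undo log and tablespace files live on the data nodes' disks.
cur.execute("CREATE LOGFILE GROUP lg1 "
            "ADD UNDOFILE 'undo1.log' INITIAL_SIZE 128M ENGINE NDB")
cur.execute("CREATE TABLESPACE ts1 ADD DATAFILE 'data1.dat' "
            "USE LOGFILE GROUP lg1 INITIAL_SIZE 512M ENGINE NDB")

# Non-indexed columns of this table are stored on disk; indexed columns
# (here, the primary key) still have to stay in memory.
cur.execute("CREATE TABLE mailbox ("
            "  id BIGINT PRIMARY KEY,"
            "  owner VARCHAR(64),"
            "  body VARCHAR(2000)"
            ") TABLESPACE ts1 STORAGE DISK ENGINE NDBCLUSTER")
conn.commit()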
Henry, maybe you should start a server/workstation company, and you might just out-sell not only Sun but also HP, IBM, NEC, ...
IMHO, however, your way of estimating 'cost' & 'performance' is way too naive - typical for an enthusiast, though. ;)
Let's see how much 48 250GB drives cost. About $4300. A big customer such as Sun would pay less, probably around $3000-$3500.
And a 2P Opteron system with software RAID. Maybe $5K.
Fancy case $500.
Like you can build such a thing. First find a 4U case that can fit 48 drives, please, before you come up with a figure.
And you know what... if I add a 24-port HARDWARE RAID card to that box I built, it runs me $1100.
You mean two such cards. Now please find a 4U case that holds 48 drives, 2 full length PCI-X cards and a motherboard that provides two PCI-X slots, sockets for two Opterons and slots for 32GB of RAM that is STABLE.
So I am still $22K ahead of Sun and I have hardware RAID.
In your dreams.
And guess what, Sun-lover-ikou?
That hardware RAID supports heavy duty data protection like RAID6.
What does your steaming crap pile of ZFS software RAID support?
"RAID 0, 1, 0+1, 5 enabled by RAID-Z"
Right from the x4500 spec page. That's it. Basic entry level RAID available on a cheap Windoze PC. But Sun will make you pay through the nose for it.
I dunno. I wonder if 1GB of RAM on that Areca 24-port RAID card is enough for the 24 drives it is handling. If it is not, it is going to be SO FRIGGING SLOW. Do you know why certain hardware RAID cards are slow? It is because they are not suitable for running RAID5. Heavy duty data protection like RAID6? Nice description. However, all RAID6 does is survive the loss of TWO disks, while RAID5 survives the loss of one. That is better... but hardly by much when you are talking about 24 drives. You had better damn well ensure that 1) your dumb case ALLOWS proper cooling and 2) your POWER SUPPLY(IES) are up to pushing 48 drives, otherwise more than just two drives are going to be failing and all that fancy hardware RAID6 won't mean squat.
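For what it's worth, here is the standard back-of-the-envelope MTTDL (mean time to data loss) model people trot out for this comparison. Treat it as a sketch only: the drive MTTF and rebuild time are guesses, unrecoverable read errors are ignored, and the formulas assume failures are independent -- which is exactly the assumption that a badly cooled case or a marginal power supply destroys.

# Textbook MTTDL approximations for RAID5 vs RAID6 on a wide array.
# MTTF and rebuild time are guesses; failures are assumed independent.
HOURS_PER_YEAR = 24 * 365

def mttdl_raid5(n, mttf, mttr):
    # lose data if a second drive dies while the first is rebuilding
    return mttf ** 2 / (n * (n - 1) * mttr)

def mttdl_raid6(n, mttf, mttr):
    # lose data if a third drive dies during a double-failure rebuild window
    return mttf ** 3 / (n * (n - 1) * (n - 2) * mttr ** 2)

drives = 24          # one 24-port controller's worth of disks
mttf = 500_000       # hours per drive, a vendor-style (optimistic) number
rebuild = 24         # hours to rebuild one big SATA drive under load

print(f"RAID5, 24 drives: {mttdl_raid5(drives, mttf, rebuild) / HOURS_PER_YEAR:,.0f} years MTTDL")
print(f"RAID6, 24 drives: {mttdl_raid6(drives, mttf, rebuild) / HOURS_PER_YEAR:,.0f} years MTTDL")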
If Sun wants to be innovative, then they have to do the frakkin work to make storage cheap for people, not put 48 drives in a super heavy box.
Obviously you don't know what you are talking about.
And decide that they can offer real hardware RAID. For $1100/24 ports, there is just no reason to be running software RAID.
Plenty of people choose software raid over hardware raid. For good reasons like performance and maintainability.
Sun is retarded. The market was not crying out for a bunch of overheating overvibrating drives crammed into a *170 LB* drawer that doesn't hot swap from the front.
I like Sun, I really do. But they *do not* get the real world.
You rate your knowledge and wisdom too highly. Sun has come up with a cheap and beautiful solution because 1) they have the engineering know-how (if it is I/O, Sun is about the best in providing such solutions, Tyan and Supermicro will never match a Sun motherboard...not yet anyway) and 2) AMD made it possible with their platform.
Most *enterprises* will get a lot more VALUE out of 16 2P servers. At the very least that is 8 2-box clusters. And that is very useful for database, web, app server, etc.
One box is still one box. A single point of failure.
Well, you have not considered the space and power consumption of a single box versus 16 boxes, and the cooling they require. As to whether most enterprises will get a lot more value out of 16 2P servers, that is an open question that you are not fit to answer because 1) you are not the enterprise in question and 2) they never had that option before, but they do now.
The Sun Fire X4600’s eight-way AMD Opteron -- now talk about virtualization supreme and gigantic savings. Orwell’s “Big Brother” would be impressed with this machine packed into a 4U case. Two- and four-way systems will remain the market sweet spot, but big jobs call for big machines.
One issue with the x4600
The RAID controller is limited to RAID1 or RAID0, so with four disks, you can either mirror two and two or create a stripe of all four, but you can’t run RAID5 across all the disks, which is unfortunate.
I like the SUN products but have to agree that they are 10% to 15% overpriced; as a shareholder of SUN, I would recommend SUN lower the price for the sake of volume sales.
You mean two such cards. Now please find a 4U case that holds 48 drives, 2 full length PCI-X cards and a motherboard that provides two PCI-X slots, sockets for two Opterons and slots for 32GB of RAM that is STABLE.
As I said, no one was crying out for such a retarded solution. Why put all that in one 4U box? Does an enterprise usually run out of budget for racks? No. It's those things that go in the racks that cost the big bucks. Like overpriced gee-whiz science fair servers from Sun.
Another way to get a better deal would be to buy a 4P motherboard (many available from Supermicro, Tyan, IWill, etc) and there's an easy 32GB RAM. Remember it is easier and faster to scale RAM up on AMD with more physical processors.
Instead of buying a bunch of drives, you could buy a NetApp server for storage and still be ahead of Sun's pricing.
Let's see... that gives a screaming 4P server and rock-solid storage from the #1 company in the storage industry. Hmmmm.
Or you could spend more on a brand new system with an early adopter software RAID system. And deal with 48 drives worth of heat and vibration in a 170LB 4U box.
This x4500 is going to be a flop. It does not solve any problem that exists today.
There is no market for giant software RAID hard-to-swap drawer-servers. Sun will find this out once the "gee-whiz" hype fest is over.
Instead of buying a bunch of drives, you could buy a NetApp server for storage and still be ahead of Sun's pricing.
The difference between NetApp's stuff and the x4500 is the difference between Xeon plus special software and Opteron with Direct Connect Architecture and Solaris 10 ZFS. With the x4500, 10% of the Opteron computing power is used for pumping data and 90% can be used for other purposes. With NetApp, 100% of the Xeon CPU cycles are needed to keep pounding the Intel FSB for bandwidth allocation.
The only advantage the x4500 has is packaging - the rest should be proven:
- power consumption
- performance
- availability
The x4600 is a nice box, but it would kill SPARC up to 32-way. Why hasn't Sun published anything beyond SPEC numbers? SAP SD, SPECjAppServer, other tests. I believe the results are way better than the SPARC numbers.
Let me make some predictions here
- SPARC will be sold to Fujitsu
- x64 will be bought by Dell
- Sun ends up with Solaris and java
As I said, no one was crying out for such a retarded solution. Why put all that in one 4U box? Does an enterprise usually run out of budget for racks? No. It's those things that go in the racks that cost the big bucks. Like overpriced gee-whiz science fair servers from Sun.
How do you know? One such box could replace RACKS of file servers and their frontend servers, and it comes with substantial processing power to boot. With 32GB of RAM and speedy processors, it could easily become the basis for replacing an entire mailstore cluster. Add another one and QFS, and there you have redundancy. At, say, 6000 USD per year per rack replaced, and say 4 racks of not-so-old hardware (say 4 or more years old), you can get an ROI in two and a half years without factoring in the increase in storage and performance from the X4500.
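Spelling out that consolidation arithmetic (the per-rack cost and rack count are the assumptions above; the box price is a rough 60K figure):

# ROI arithmetic for replacing old racks with one X4500, per the comment above.
racks_replaced = 4
cost_per_rack_per_year = 6000      # USD/year to keep one old rack running (assumed)
x4500_price = 60000                # rough street price of one box (assumed)

savings_per_year = racks_replaced * cost_per_rack_per_year
print(f"Savings: ${savings_per_year:,}/year")
print(f"Payback: {x4500_price / savings_per_year:.1f} years")   # ~2.5 years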
Another way to get a better deal would be to buy a 4P motherboard (many available from Supermicro, Tyan, IWill, etc) and there's an easy 32GB RAM. Remember it is easier and faster to scale RAM up on AMD with more physical processors.
Why would I buy a whole bunch of 4P boxes if I am going to spread the storage out? A cluster of 4P boxes != a Sun X4500. Just the latency between boxes rules it out, and that is not even considering how one is going to get 48 drives spread across boxes to look like one single volume.
Or you could spend more on a brand new system with an early adopter software RAID system. And deal with 48 drives worth of heat and vibration in a 170LB 4U box.
Wow, I have had so many production servers running on Linux software RAID that I do not have a problem with the idea. What is the problem? The only problems I had were disk failures, not Linux software RAID bugs or crashes. 48 drives' worth of heat and vibration? Ha! So? I have seen a simple whitebox dual Xeon server with just 6 disks suffer disk instability due to insufficient cooling. The X4500, however, has 10 heavy-duty fans in the front and gaps a good few millimetres wide between all the disks, and I bet with some rubberized supports to reduce vibration. Please do not make a fool of yourself by thinking that Sun engineers would not have thought of the most obvious problems.
You are full of hot wind, dissing real engineers.
Sun is about the best in providing such solutions, Tyan and Supermicro will never match a Sun motherboard...not yet anyway
Ha, while perhaps your post is mostly correct, it is very likely that this new Sun system is in fact using the Iwill H8502 motherboard:
http://www.iwill.com.cn/product_2.asp?p_id=107&sp=Y
Just compare the description of Sun's 8P system with that of the H8502, and you can see how much similarity there is.
Richard... I'd hang low for a while; your pricing and cluster analysis is a bit !@#$#@!
Sharikou is right! What he is saying (at least I think) is that you're better off buying a single machine than a cluster! Generally speaking, it is more economical and performs WAY better.
Now that doesn't mean that companies won't buy clusters to meet their special needs.
But with time, as server vendors cram more into less, clusters will no longer be needed, causing companies like Oracle to stop (or slow) R&D investments in clustered software.
"Dude, four 2P low end servers are four 2P low end servers. They don't add up to 8P. A 8P server is more powerful than sixteen 2P servers combined."
I don't think you know enough to say that for certain. Show some benchmarks and maybe we will believe you. However, you don't have any credibility left so you need to back up what you say with benchmarks.
Sheik Kalif
"AMD is a platform company and Intel is a CPU company with onboard wireless. An unfortunate happenstance."
This is just diarrhea of the keyboard. Intel has been doing platforms much longer than AMD. Intel mobo + Intel CPU + Intel Gb Enet + TCP offload + Intel compiler optimization + Intel wireless + Intel flash + Intel .... well, you get the idea -- that makes a platform. AMD has to cobble the same stuff together from third parties to make a "platform".
Sharikou is right! What he is saying (at least I think) is that you're better off buying a single machine than a cluster! Generally speaking, it is more economical and performs WAY better.
Yes. When people buy systems, they consider the SWaP factors: space, price, performance, power. Let's forget about space and power and only look at price and performance.
Now, look at the application you need to run and decide on what kind of hardware you want to have.
1) One big box, simple to manage, no pain, no hassle, load your app, and it runs great and you have plenty of CPU and memory to handle bursts. As long as you can afford it, go for it.
2) A bunch of small boxes -- in 99% of cases, you end up paying more and getting less. The reason is simple: your app is not written for a cluster, and the cluster interconnect is 100 times slower than what's inside an SMP.
People may use Google as a success story for a cluster of cheap PCs. But Google literally wrote its own OS and filesystem for that, even though search is not transactional -- there is no correlation between two searches. Today, we are seeing Google moving to 4P Opteron servers; why isn't Google using a zillion Geodes running at 0.9 watts each?
What we see today with AMD64 is a very fundamental shift. With Opterons, we can have scalable 16-way computing at very affordable price points, unlike the old days when you had to pay 1 million dollars -- which you could use to buy 100 2P boxes and hire a bunch of programmers to hack your code.
Going forward, we will see 4P or even 8P becoming the standard. The x4600 is the first single board 8P server and it's just the start. One x4600 with 16 Opteron 885 cores is much better than 32 2P Woodcrest machines.
Or, you can look at it this way:
People pay $1 million for a 16P Itanium Superdome, enough money to buy 200 2P Woodcrest machines. Why?
Are people stupid? No. It is because a $1 million 16P Superdome is faster than 200 $5K 2P Woodcrest machines, and cheaper in the long run.
Now, I tell you this: A SUN x4600 at $70K is faster than a 16P Superdome.
"Intel mobo + Intel CPU + Intel Gb Enet + TCP offload + Intel compiler optimization + Intel wireless + Intel flash + Intel .... "
Hmm... if I didn't know any better, this sounds an awwwwwwful lot like a monopoly to me...
Pushing out all the mobos , the Network, flash, etc.
Wait... they have done that. Is this why the industry hasn't moved (well, has moved only at Intel's dictated pace), since they squeeze out every single ounce of profit from all their partners?
Ever wonder why Intel's gross margin is ~50%? Dell's ~12%? And everyone else is in the LOOOOOOW single digits (WITH the Intel rebate kickback)??
That's right... monopoly. And history has taught us monopolies halt all progress. (See Rockefeller.)
Sorry for off topic...
I think Sun is going in the right direction, and doing something different from the plain vanilla servers everyone else is churning out. :)
I'm just curious but how many of you guys actually can personally relate to this hardware?
Very few, and those very few are the only ones who have posted information that is correct or relevant, for that matter.
Can we just stop with the assumptions? If you aren't in the industry, don't post useless facts.
Here is something you can easily put together today. I put together two with 400GB Hitachi drives a year and a half ago:
1 AIC 8U 42-drive enclosure $3789
http://www.aicipc.com/productDetail_galley.asp?catID=6&id=187
1 Tyan 4882 quad-socket mobo $1000
4 Opteron 885s $2086 x 4 = $8344
16 2GB 400MHz Reg DIMMs = $6160
4 RAID cards $2156
42 750GB Seagate SATA IIs = $15000
That's about $36,500 for 31.5TB of space, 32GB of RAM, plus 8 cores.
AIC has a 5U 48-drive vertical mount case as well:
http://www.aicipc.com/productDetail_galley.asp?catID=5&id=173
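Summing up that parts list (prices as quoted above; capacity is raw, before any RAID overhead):

# Tally of the DIY parts list above, prices as quoted in the comment.
parts = {
    "AIC 8U 42-drive enclosure":     3789,
    "Tyan 4882 quad-socket board":   1000,
    "4x Opteron 885":            4 * 2086,
    "16x 2GB 400MHz Reg DIMM":       6160,
    "4x RAID cards":                 2156,
    "42x 750GB Seagate SATA II":    15000,
}
total = sum(parts.values())
raw_tb = 42 * 0.75
print(f"Total: ${total:,}")                   # about $36,500
print(f"Raw capacity: {raw_tb:.1f} TB")       # 31.5 TB
print(f"Cost per raw TB: ${total / raw_tb:,.0f}")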
Performance-wise, the Sun X4500 board, if I understand correctly, has two PCI HT tunnels hanging off each CPU, which gives it 8 direct PCI-X buses (plus it looks like it has 2 more PCI-X buses that don't connect directly to a CPU but rather to another tunnel). The Tyan board only has 1 PCI tunnel, or 2 PCI-X buses.
Both of the boxes I have have been rock solid, no drive failures. I'm using 3ware 9000-series cards in these, and I can say Linux software RAID-5 is much faster than the hardware RAID-5 these cards have, so I've been using Linux software RAID-5. It's very flexible, and features keep being added to the Linux kernel.
I'm going to build another one as soon as quad Socket F motherboards are available.
"Hmm... if I didn't know any better, this sounds an awwwwwwful lot like a monopoly to me..."
Do you know the economics of chip manufacturing? How can you compare Dell's operating margins with Intel's?
Chip manufacturing is very different from assembling parts.
Intel invests $6B a year to create a $250B industry that the whole ecosystem survives upon. If you were in the semiconductor industry, you would have lost your job a long time back had it not been for Intel. The brightest people across the globe flock to Intel for work. AMD is still a street urchin compared to Intel.
I can say Linux software RAID-5 is much faster than the hardware RAID-5 these cards have
This is my experience too. I was benchmarking the servers I built with a 3ware 8-port RAID card against a software RAID configuration, and found the software RAID faster. Running database benchmarks, I didn't see any advantage with hardware RAID.
For performance, I think the SUN x4500 has more balanced I/O. But your custom box has more computing power. One thing I was wondering about is the cabling in your custom box: were you using those bundled cables (one cable carrying many SATA links)?
"One thing I was wondering is cabling in your custom box, were you using those bundled cables (one with many SATA links)?"
Yes, they're actually InfiniBand cables. The case has 10 4-drive SATA backplanes, and one InfiniBand cable connects each backplane to a 3ware multilane card. The two system drives in the rear of the case use normal SATA cables to the motherboard.
Big (Medium) Iron vs. Clusters
The fact is the Opteron platform can probably cover it all in the future. One can use low-wattage server blades to get the benefits of a cluster, namely scaling as needed, and new beefed-up Opteron systems like the ones from Sun or even Fabric7 can definitely challenge some Unix and Itanium systems.
Our company got on the Linux bandwagon in an effort to convert as many flavors of UNIX - HP-UX, AIX, Tru64 - to Linux as possible. Recently we had to start supporting Solaris on AMD64 - i.e., adding more UNIX flavors. Our customers demand stable, high-performance systems that can run our complex software 24/7. Guess what? SUN boxes, SPARC or x86, just do that and are actually a cheaper long-term solution than anything else. We tried a lot of options. Dell and HP x86 just could not cut it. Oracle 10g grid did not work as well as it should have. Our customers have been getting T1000, T2000, V490, V890, E2900 and x4200 machines and will be extremely happy to get some of these new AMD64-based SUN boxes. I am not a big Solaris fan, but SUN did an extremely good job optimizing the OS for the Opteron, and it shows. Our primary dev platform is now Solaris 10 AMD64, which replaced a similar machine with RHEL4 AMD64. I think the rumors of SUN's demise are way more than premature.
"Going forward, we will be 4P or even 8P becoming the standard. The x4600 is the first single board 8P server and it's just the start. One x4600 with 16 Opteron 885 cores is much better than 32 2P woodcrest machines."
x4600 actually has 8 CPU boards. It is hardly "single board".
However, the x4600's modular design is a step in the right direction.
Sun just needs to sell it at half the price.
Clustering, especially with MySQL 5.1 or newer, is simple and powerful. It gives most enterprises much more value than one big server.
And remember, Sharikou, with external HT connectors, multi-machine clustering with HT will be available soon.
Thanks. MySQL has definitely made a lot of progress. I asked a MySQL guy whether I could get NDB working, say, by creating large swap files, and the answer then was NO, it had to be physical memory. I forget the architecture details, but I think the queries were done in a central node.
Still, I think that one big box is much better if cost is comparable and bandwidth is not an issue. 10GbE can't match HT.
As I wrote before, HT 3.0 will change the whole clustering game. Instead of running a cluster with many OS instances and many memory spaces, with HT 3.0 you connect a bunch of machines together and they become a larger SMP box with one copy of the OS and one memory space. So instead of having two x4600 8P 16-way machines and dealing with clustering headaches and loss of performance, you get one 16P 32-way machine, even though the two boxes are sitting 3 meters apart.
$60K isn't overpriced; it's a bargain. Why? Because, barring technical problems we haven't foreseen, we'll be able to collapse a lot more than just 15 of our current PowerEdge servers onto it.
Yes. Andy Bechtolsheim said the new x4600 and the 8000 blade system are for data center consolidation. One x4600 is intended to replace 20 Xeon servers.
Second, the X4600 is lacking in storage: a max of 4 drives, and no hardware RAID 5
That's why there is an x4500 with 24TB of storage. Trust me, RAID-Z with ZFS is 100 times better than RAID5. RAID-Z is far more reliable than RAID5.
Read this:
http://www.internetnews.com/ent-news/article.php/3619396
"...the Tokyo Institute of Technology is already using fifty of the X4500s to, among other things, work with large media files in parallel."
Tim O'Reilly, the CEO of O'Reilly Media, was impressed enough with the X4500 to say, "This is the Web 2.0 server." O'Reilly's company coined the term "Web 2.0"
"In addition to claiming twice the I/O and longevity of the competition, Sun said the Blade 8000 has up to 40 percent lower power requirements."
Sun claimed several world record benchmark specs for the new systems.
Both of the boxes I have have been rock solid, no drive failures. I'm using 3ware 9000-series cards in these, and I can say Linux software RAID-5 is much faster than the hardware RAID-5 these cards have, so I've been using Linux software RAID-5. It's very flexible, and features keep being added to the Linux kernel.
How much RAM do you have on your 3ware card? There is a point where hardware RAID cards with enough processing power become slower than software RAID, and that is when the onboard RAM buffer becomes too small. So 1GB of RAM on a 24-port RAID card with 24 500GB or 750GB drives attached makes me laugh if someone is going to claim that the RAID card will perform better than software RAID at RAID5. It will perform better... if it is lightly loaded and if it is not in degraded mode.
Please comment on Anand's latest declaration "the Birth of the New King".
http://www.anandtech.com/IT/showdoc.aspx?i=2793
Please comment on Anand's latest declaration "the Birth of the New King".
I wonder if Intel paid $5000 for that "King" word.
As you can see from this article, the Real King is the Galaxy x4600, which smashes the $1 million 16P HP Superdome at 5% of the cost.
Uhh... does the term "hardware partitioning" mean anything to you???
I'm just curious but how many of you guys actually can personally relate to this hardware?
My company has been benchmarking OpenLDAP in large deployments for a couple of customers, using SGI Altixes and handling databases over a terabyte in size (150 million directory entries) on systems with 32 Itaniums and a terabyte of RAM. Thus far, these are among the largest and fastest LDAP installations on the planet (hitting around 20k queries/second on 150 million entries).
As Sharikou said, a farm of 2P servers simply cannot execute this workload. In this context, even these new Sun servers are puny. I'm still waiting for HT 3.0 so we can test massively large single-system-image Opterons at this scale.
Clusters are OK when your workload is trivially parallelizable, or when you can tolerate the latency imposed by internode communication. Database lookups themselves don't have a lot of interthread dependencies, but there's no getting around the need to have direct access to the entire indices, or suffer response latencies from having to chain the lookups around, or constantly thrash a tiny cache trying to page things in and out.
Kind of off topic...
Since so many people over here have experience with software RAID under Linux and swear by it, I thought I'd ask this question here.
Can the Linux kernel do nested RAID like RAID100 and RAID50?
ref.
RAID100
http://en.wikipedia.org/wiki/Redundant_array_of_independent_disks#RAID_100_.28RAID_10.2B0.29
RAID50
http://en.wikipedia.org/wiki/Redundant_array_of_independent_disks#RAID_50_.28RAID_5.2B0.29
Another off-topic question...
Intel still relies on the outdated FSB architecture. Since cache coherency on an Intel "dual-core" chip would have to go over the FSB, providing a unified cache solves that problem (while creating the new problem of cache thrashing).
Could it be that Intel chose to put a unified cache for both cores on a die instead of discrete caches to partially overcome the problem of cache coherency over a bandwidth-starved FSB? (Does not apply to systems with 2 or more sockets)
Can this unified cache explain at least part of the performance boost that exists in Conroe over the "dual-core" Netburst design?
Also, what would happen to the performance of Conroe compared to "dual-core" Netburst when the dataset is larger than the cache or when two different threads have working datasets that sum to be greater than the cache on Conroe?
I want to compare Conroe to "dual-core" Netburst because Intel will be producing more of the latter for at least the next 2 (if not 3) quarters. Intel may be doing this because Conroe may not perform as well as "dual-core" Netburst does in bigger systems, which everybody will soon find out, and Intel won't have a solution for some time, so it is keeping "dual-core" Netburst around for the time being.
Shared caches are the right way to integrate multicores. IBM Power processors have done this for years. So does Sun's Niagara (8 cores).
Cache thrashing should be pretty pathological. Not saying it can't happen, but notice that Conroe has 4MB, twice as much as Opteron's 2x1MB. One Conroe core would have to "overpower" the other 3:1 to push it into parity with Opteron's 1MB.
More typically, there's a benefit in a shared cache as the code is stored just once and used by both processors. This means there's more room for data, as compared to a 2x2MB split arrangement.
Doesn't IBM's Power and Sun's SPARC have integrated memory controllers just like AMD and unlike Intel? Wouldn't that make a difference?
Can the Linux kernel do nested RAID like RAID100 and RAID50?
I am not sure if this is what you mean, but I can create two mirrors from 4 disks and then stripe those mirrors. md0 and md1 are the first two mirror arrays, and then md2 will be the striped array of md0+md1.
If you had eight disks, you could do the same with md3, md4 and md5, and then create an md6 which is a stripe array of md2 + md5.
So yeah, Linux software RAID can be nested.
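If anyone wants to try the layering described above, it is just repeated mdadm --create calls. Here is a rough sketch driven from Python -- the device names are invented, you would run it as root on scratch disks only, and for the simple four-disk case the kernel's native raid10 level is usually a better choice than stacking arrays by hand:

# Nested md RAID as described above: two RAID1 mirrors striped into RAID0
# (i.e. RAID 10 built by hand). Device names are made up for illustration.
import subprocess

def mdadm_create(md, level, devices):
    # wraps: mdadm --create <md> --level <n> --raid-devices <count> <devices...>
    subprocess.run(
        ["mdadm", "--create", md, "--level", str(level),
         "--raid-devices", str(len(devices))] + devices,
        check=True)

# two mirror pairs...
mdadm_create("/dev/md0", 1, ["/dev/sdb", "/dev/sdc"])
mdadm_create("/dev/md1", 1, ["/dev/sdd", "/dev/sde"])
# ...striped together. Add more mirror pairs plus another stripe layer on
# top of two such stripes and you get the RAID 100 arrangement discussed.
mdadm_create("/dev/md2", 0, ["/dev/md0", "/dev/md1"])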
Thanks.
That was exactly what I wanted to know.
By the way, the array you described was RAID100 (if you didn't already know that).
A related question.
If this array gets heavily loaded, how will RAID throughput performance be affected? Will it be better to get RAID controllers to do some of the RAID and have the Linux kernel do the rest? E.g., the kernel does the first level of striping and the controller does the next level of striping and mirroring.
My concern is that the system bus may get too crowded and I will have I/O problems.
Also, how good is the linux kernel at rebuilding broken arrays, i.e., when a disk in the array fails?
"Doesn't IBM's Power and Sun's SPARC have integrated memory controllers just like AMD and unlike Intel? Wouldn't that make a difference?"
Yes, they have integrated DDR memory controllers. Yes, it makes a difference. The question is: with respect to what? Since you seem to be more interested in "cache miss ratio", the answer would be 'no, an IMC does not make a difference with respect to miss ratio'. If you're wondering about "cache miss latency", it's obvious that an IMC provides lower latency.
Performance depends on the product of miss "ratio" and "latency" (how often and how long). Intel compensates for higher latency with lower miss ratio (bigger caches). It can also afford higher core frequency because the MC is in a separate chipset. Obviously, the tradeoff is different from application to application. Hence, the endless debate...
Also, notice that IMC with DDR buses is not a scalable solution. A DDR bus uses a lot of pins and is not very efficient. This is what FB-DIMM is trying to address (about 6x higher bandwidth per pin), but at the cost of slightly higher latency. It's an inherent tradeoff with serial vs parallel interfaces (higher bandwidth but higher latency).
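A tiny worked example of that "how often times how long" product, with invented cycle counts purely to show the tradeoff:

# Average memory access time = hit time + miss ratio x miss penalty.
# All numbers below are invented for illustration only.
def avg_mem_access(hit_cycles, miss_ratio, miss_penalty_cycles):
    return hit_cycles + miss_ratio * miss_penalty_cycles

# bigger shared cache, but the memory controller sits out in the chipset
big_cache_slow_mem = avg_mem_access(hit_cycles=14, miss_ratio=0.02, miss_penalty_cycles=300)
# smaller cache (more misses), but the integrated controller cuts the penalty
small_cache_fast_mem = avg_mem_access(hit_cycles=12, miss_ratio=0.04, miss_penalty_cycles=200)

print(f"big cache, slow memory:   {big_cache_slow_mem:.1f} cycles")    # 20.0
print(f"small cache, fast memory: {small_cache_fast_mem:.1f} cycles")  # 20.0
# With these made-up numbers the two come out even; shift the miss ratio or
# the penalty a little and either side wins, which is exactly why the
# tradeoff differs from application to application.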