Here's a Dell story you don't see too often

Archived from groups: comp.sys.ibm.pc.hardware.chips

Hi Yousuf Khan,

> Dell driven out of a market by low-cost competition.
>
> http://biz.yahoo.com/rc/040816/tech_china_dell_1.html?printer=1

How can Dell compete on price-performance in these markets when they
don't sell CPUs that provide better price-performance and features?

From the article:

"Dell only sells PCs equipped with Intel CPUs, an arrangement not
expected to change in the near term, Amelio said. Lenovo, Hewlett
Packard and China's No. 2 PC seller, Founder Group, have all recently
introduced models in China powered by AMD chips."

Thankfully Intel's got an astonishing marketing machine in Western
countries. Check out these objective truths:
<http://www.infoworld.com/article/04/08/13/33TCworkstation_1.html>

Intel's Xeon-based workstations are much faster than workstations based
on AMD's Opteron when it comes to heavy multitasking

<http://www.infoworld.com/infoworld/article/04/08/13/33TCworkstation-sb_1.html>

Despite a great deal of hype, AMD's 2.2GHz Opteron 248 CPU -- as
embodied in the IBM IntelliStation A Pro workstation -- doesn't fare
well under heavy workloads.

...

In fact, across the range of tests, the Opteron system took an average
of 15 percent longer to complete the tasks than the Xeon.

The Opterons are "in fact CPU-bound and running out of processor
bandwidth." They can't even keep up with last-generation Xeons. "The story
gets worse for AMD when you factor in the newest Xeon processors from
Intel."

Infoworld's bottom line:
"... with heavy processing, the 2.4GHz Opterons show their limitations and
the A Pro starts to crawl." They're no match for 3.2GHz Xeons which are
"the performance king."

The benchmark methodology and the paucity of information appear to
preclude anyone from reproducing the results.

Regards,
Adam
 
Archived from groups: comp.sys.ibm.pc.hardware.chips

Adam Warner <usenet@consulting.net.nz> wrote:
> Hi Yousuf Khan,
>
>> Dell driven out of a market by low-cost competition.
>>
>> http://biz.yahoo.com/rc/040816/tech_china_dell_1.html?printer=1
>
> How can Dell compete on price-performance in these markets when they
> don't sell CPUs that provide better price-performance and features?

It seems Intel doesn't have enough money to market properly to the entire
Chinese market the way it does in the Western world. Thus its processors
are at a disadvantage, based simply on price.

Yousuf Khan
 
Archived from groups: comp.sys.ibm.pc.hardware.chips

On Mon, 16 Aug 2004 19:24:03 +1200, Adam Warner
<usenet@consulting.net.nz> wrote:
>Hi Yousuf Khan,
>
>> Dell driven out of a market by low-cost competition.
>>
>> http://biz.yahoo.com/rc/040816/tech_china_dell_1.html?printer=1
>
>How can Dell compete on price-performance in these markets when they
>don't sell CPUs that provide better price-performance and features?

The CPU has almost nothing to do with the price. The key phrase from
the article is right here:

"Sellers have cut prices to as little as 3,000 yuan ($362) per unit by
offering models without Microsoft's Windows operating system"

That is where the price difference is coming from. Windows is the
ONLY expensive component in a modern low-end computer. The cost of a
WinXP Home Edition license is roughly $100. The cost of service and
support is another $100+. The cost of ALL the hardware comes to
under $200 for a low-end system, and most of that is tied up in the
hard drive and motherboard.

When Dell buys Intel Celeron chips they are paying damn near nothing
for them. Maybe $35 or $40. AMD might be able to sell their chips
for $30 or $35, shaving a few percent off the top, but even in China
and other developing markets that isn't going to make a huge
difference. But cutting $100 off the top by dropping WinXP from the
price definitely will make a huge difference.

-------------
Tony Hill
hilla <underscore> 20 <at> yahoo <dot> ca
 
Archived from groups: comp.sys.ibm.pc.hardware.chips

Robert Myers wrote:
> In any case, the point of the InfoWorld article was that the Xeon
> workstations excelled on mixed workloads...the kind an actual
> workstation user _might_ experience...different for different kinds of
> users to be sure, but a better measure of workstation performance
> than a database benchmark.
>
> Intel hypes hyperthreading every chance it gets because it's something
> Intel's got that AMD doesn't. There's been much online discussion
> among people who could be expected to be knowledgeable, and the best
> conclusion I can draw about SMT is that, as a design strategy, it's a
> wash...if you consider performance per watt or performance per
> transistor. That leaves open the question of responsiveness. Anybody
> who uses a workstation and does CPU-intensive work has had the
> experience of having the system become annoyingly slow. Does
> hyperthreading help with _that_? The InfoWorld article suggests that
> it does, and a database benchmark doesn't seem particularly relevant.

Actually the problem with the Infoworld article is that it's not even really
a true test of multitasking performance. If you read the article and then
do some checking up on the tools used, it's very shady. First of all, the
benchmarking application is described on the company's website here:

http://analyzer.csaresearch.com/

It's actually called *HTP* Analyzer (i.e. Hyperthreading Analyzer). So it's
a benchmark specifically designed for and geared towards Hyperthreading;
it knows how to detect it and how to make full use of it. If you read
through the description of this benchmarker a little, you'll find there are
two major components to the suite: first, it can run real-world applications
through a test-script facility; second, it tests the system's multitasking
efficiency by running simultaneous background workloads. So you might think
that since it runs real-world apps in a test-script, it must be one of those
good application benchmarks rather than one of those bad synthetic
benchmarks. But then you read about what it loads the background tasks with.
According to its webpage, it creates "simulations" of real-world workloads
such as Database, Workflow, and Multimedia. These aren't real database,
workflow or multimedia applications, just simulations of them -- so they are
synthetic workloads. He's not running multiple simultaneous real-world
applications; he's running only one real-world app thread, plus several
synthetic app threads to load it down. It's a synthetic benchmark cleverly
masquerading as an applications benchmark.

Now, how could this benefit a Hyperthreading processor over a non-HT one?
Well, on an HT CPU, the benchmark can arrange to run the applications
test-script on the primary HT logical processor, while all of the synthetic
load-generating simulations execute on the secondary logical processor.
Windows would then multitask the applications test-script in the run queue
of one logical processor where nothing else is running, while the synthetics
contend amongst themselves for attention on the secondary logical processor.
On a non-HT CPU, all of the tasks (whether real or synthetic) would contend
for timeslices within the same Windows run queue.
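
To make the conjecture concrete, here's a minimal Win32 sketch of how a
benchmark *could* pull this off. It's purely illustrative: the two thread
functions are hypothetical stand-ins, and the actual HTP Analyzer code
isn't public, so there's no claim this is what it really does.

#include <windows.h>

/* Assume a dual-processor HT system where Windows sees four logical CPUs,
   with the two physical processors enumerated first (0, 1) and their HT
   siblings last (2, 3). */

DWORD WINAPI real_app_script(LPVOID arg)  { /* the timed workload */ return 0; }
DWORD WINAPI synthetic_loader(LPVOID arg) { /* disposable load */    return 0; }

int main(void)
{
    int i;

    /* Pin the one measured thread to logical CPU 0: it gets that run
       queue all to itself. */
    HANDLE t = CreateThread(NULL, 0, real_app_script, NULL,
                            CREATE_SUSPENDED, NULL);
    SetThreadAffinityMask(t, 1 << 0);
    ResumeThread(t);

    /* Pile all three synthetic loaders onto logical CPU 2, an HT sibling,
       where they contend only with each other. */
    for (i = 0; i < 3; i++) {
        HANDLE s = CreateThread(NULL, 0, synthetic_loader, NULL,
                                CREATE_SUSPENDED, NULL);
        SetThreadAffinityMask(s, 1 << 2);
        ResumeThread(s);
    }

    WaitForSingleObject(t, INFINITE);  /* only this elapsed time is reported */
    return 0;
}

On the non-HT Opteron box there is no idle sibling to dump the loaders on,
so all four threads end up fighting over the same two run queues.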

So, given three simulated workloads and one real application load, when you
put the real application on its own logical processor, what you've
effectively done is give the application test-script a 3:1 priority
advantage over the synthetic workload simulations. On a non-HT CPU, all of
the threads go into the same Windows run queue, and they all get equal
priority under the default task-scheduling behaviour. Only the real-world
app test-script's elapsed time is ever recorded; the results of the
simulated workloads are never measured, just discarded, since they are only
there to add a simulated load.

Now, is this a good measure of a multitasking workload? Only if you consider
a proper use of multitasking to be running one real-world app in the
foreground while disposable workload simulators bog it down in the
background.

Okay, those were just the technical faults with this benchmark. There's also
some conspiracy-theory stuff here. One of the co-authors of this article,
Randall C. Kennedy, happens to be the designer of this benchmark:

http://www.csaresearch.com/about.asp

Mr. Kennedy was once an employee of Intel, according to the above biography:

"Later, as a contract testing and development engineer for Intel
Corporation, he led the effort to create tools and resources to articulate
the company's performance initiatives surround high-end desktops (Constant
Computing) and Gigabit Ethernet networking."

Which sounds like he worked in the benchmarketing department.

Furthermore, this guy is some sort of long-time crusader for Hyperthreading.
He's written articles favouring Hyperthreading for a long time now, this one
from about two years ago:

http://www.networkcomputing.com/1324/1324buzz2.html

Nothing wrong with being a crusader for the technology and showing the world
an example of an application that really benefits from Hyperthreading, just
so long as you don't try to pass that off as a benchmark.

Yousuf Khan
 
Archived from groups: comp.sys.ibm.pc.hardware.chips

Tony Hill <hilla_nospam_20@yahoo.ca> wrote:
> The CPU has almost nothing to do with the price. The key phrase from
> the article is right here:
>
> "Sellers have cut prices to as little as 3,000 yuan ($362) per unit by
> offering models without Microsoft's Windows operating system"
>
> That is where the price difference is coming from. Windows is the
> ONLY expensive component in a modern low-end computer. The cost of a
> WinXP Home Edition license is roughly $100. The cost of service and
> support is another $100+. The cost of ALL the hardware comes to
> under $200 for a low-end system, and most of that is tied up in the
> hard drive and motherboard.

Dell sells some systems in the US for around $399, so why is $362 such an
unreachable price point in China? Those systems in the US most likely have
Windows installed on them too. Microsoft gives OEMs such as Dell a break on
prices for prepackaged systems.

> When Dell buys Intel Celeron chips they are paying damn near nothing
> for them. Maybe $35 or $40. AMD might be able to sell their chips
> for $30 or $35, shaving a few percent off the top, but even in China
> and other developing markets that isn't going to make a huge
> difference. But cutting $100 off the top by dropping WinXP from the
> price definitely will make a huge difference.

It's likely that AMD is able to offer those low prices on its highest
performance Sempron 2800+ or higher, whereas Intel can only offer those
prices on Celeron 2.2GHz or lower. MHz marketing then misfires for Intel.
The Celerons that would match up against those Semprons would cost much more
for Intel to make, since Intel would actually have to increase the real-life
clock frequency, whereas AMD only has to dick around with the clock
frequency slightly and assign a huge new QuantiSpeed number.

Yousuf Khan
 
Archived from groups: comp.sys.ibm.pc.hardware.chips

"Yousuf Khan" <bbbl67@ezrs.com> wrote in message
news:CEhUc.682$E7T1.226@news04.bloor.is.net.cable.rogers.com...
> Tony Hill <hilla_nospam_20@yahoo.ca> wrote:
> > The CPU has almost nothing to do with the price. The key phrase from
> > the article is right here:
> >
> > "Sellers have cut prices to as little as 3,000 yuan ($362) per unit by
> > offering models without Microsoft's Windows operating system"

Here's another article that basically puts the blame for Intel's (and
therefore Dell's) uncompetitiveness squarely on Intel's shoulders:

http://www.chinadaily.com.cn/english/doc/2004-08/17/content_366242.htm

<quote>
Lenovo, earlier this month, launched a much-cheaper consumer PC series,
using CPUs (central processing units) made by AMD.
Analysts widely believe the low-price strategy, aimed at tapping the
township and rural markets, will help Lenovo increase its market share.

Insiders said Lenovo had asked Intel, without success, to provide low-price
CPUs for its new PC series.

Tapping China's township and rural markets is a natural choice, as the
penetration of PCs in big cities has reached 60-70 per cent, Yang said.

"If our partner cannot give us support, we will surely choose another," Yang
said.
</quote>

Both Lenovo (largest) and Founder (2nd largest) are doing business with AMD
after years of being Intel loyalists. It looks like the price war in China
is serious stuff and can no longer be swayed by advertising, only by price.

Yousuf Khan
 
Archived from groups: comp.sys.ibm.pc.hardware.chips

On Tue, 17 Aug 2004 07:39:25 GMT, Robert Myers
<rmyers1400@comcast.net> wrote:

>Who are the readers of Infoworld? Whether the data reflect reality or
>not, I'm sure they've got data to show that their readers are serious
>prospective enterprise buyers.

Back in the old days, Infoworld was partly a pretty hard-tech
publication, with video card and MB comparisons and such. Some time
back, maybe a decade ago, they started focusing more on the
"enterprise computing" arena, eschewing the nuts'n'bolts for
high-level coverage. Eventually, this became their entire focus.

I believe their current target readership is more along the lines of
middle-to-upper IT managers, with much less emphasis on technical
integrity and more emphasis on systems, support, and marketing trends,
but I haven't read them much for the last 5 years.


--
Neil Maxwell - I don't speak for my employer
 
Archived from groups: comp.sys.ibm.pc.hardware.chips

On Tue, 17 Aug 2004 07:09:21 GMT, "Yousuf Khan" <bbbl67@ezrs.com>
wrote:

>Here's another article that basically puts the blame for Intel's (and
>therefore Dell's) uncompetiveness squarely on the shoulders of Intel, from
>the following article:
>
>http://www.chinadaily.com.cn/english/doc/2004-08/17/content_366242.htm

I haven't seen any articles with any data regarding the configurations of
these competing systems. That is, which AMD chips is Lenovo putting in its
bottom-end boxes vs. which Intel chips is Dell putting in its
China-targeted bottom-tier PCs? This would tell you how much of a price
impact the actual CPU has on the final system. Of course, even $20 is a
fair bit of margin on a $360 PC.

The article did note that Lenovo has been losing money, and is
attempting to narrow losses by focusing on core businesses. It sounds
to me like they're emphasizing market share over profits, which is a
strategy Dell has never been too fond of. Dell's China growth
estimate is still pretty hefty, and they may have decided to focus on
the middle range where there are still some profits to be had. Time
will tell which is the right approach for the China market.


--
Neil Maxwell - I don't speak for my employer
 
Archived from groups: comp.sys.ibm.pc.hardware.chips

Robert Myers <rmyers1400@comcast.net> wrote:
> Adam Warner wrote:
> I wonder who the readers of Anandtech really are.

Apparently, many of them are writers for Infoworld. :)

The aforementioned Randall C. Kennedy, co-author of the Infoworld
Hyperthreading benchmark article, can be found wandering around the forums
at Anandtech.

Yousuf Khan
 
Archived from groups: comp.sys.ibm.pc.hardware.chips

Yousuf Khan wrote:

> Robert Myers wrote:
>

<snip>

>
> So, given three simulated workloads and one real application load, when you
> put the real application on its own logical processor, what you've
> effectively done is give the application test-script a 3:1 priority
> advantage over the synthetic workload simulations. On a non-HT CPU, all of
> the threads go into the same Windows run queue, and they all get equal
> priority under the default task-scheduling behaviour. Only the real-world
> app test-script's elapsed time is ever recorded; the results of the
> simulated workloads are never measured, just discarded, since they are only
> there to add a simulated load.
>
> Now, is this a good measure of a multitasking workload? Only if you consider
> a proper use of multitasking to be running one real-world app in the
> foreground while disposable workload simulators bog it down in the
> background.

Your key claim (I believe) is that the benchmark software is a
subterfuge by way of giving scheduling attention to the jobs on the
hyperthreaded system but not on the Opteron system. That's an
interesting theory, and it may well be correct, but your analysis rests
on assumptions about the actual benchmark and about scheduling behavior
that I don't know how to check.

One can always, at least in theory, arrange job priorities so that
background jobs interfere minimally with foreground jobs. Without any
constraint on how the background jobs are hog-tied, you could probably
get any result you wanted...if indeed you are fiddling with scheduling
priorities.

> Okay, those were just the technical faults with this benchmark. There's also
> some conspiracy-theory stuff here. One of the co-authors of this article,
> Randall C. Kennedy, happens to be the designer of this benchmark:
>
> http://www.csaresearch.com/about.asp
>
> Mr. Kennedy was once an employee of Intel, according to the above biography:
>
> "Later, as a contract testing and development engineer for Intel
> Corporation, he led the effort to create tools and resources to articulate
> the company's performance initiatives surround high-end desktops (Constant
> Computing) and Gigabit Ethernet networking."
>
> Which sounds like he worked in the benchmarketing department.
>
> Furthermore, this guy is some sort of long-time crusader for Hyperthreading.
> He's written articles favouring Hyperthreading for a long time now, this one
> from about two years ago:
>
> http://www.networkcomputing.com/1324/1324buzz2.html
>
> Nothing wrong with being a crusader for the technology and showing the world
> an example of an application that really benefits from Hyperthreading, just
> so long as you don't try to pass that off as a benchmark.

"Benchmark" is a pretty broad term. The manufacturer benchmarks that
are published in places like specbench.org, tpc.org, and
http://www.cs.virginia.edu/stream site aren't perfect, but at least they
put hardware on a common footing and the rules are spelled out in detail
for all to see and to complain about. Manufacturers are free to do
whatever they want, so long as they don't break the rules. That leaves
alot of room for creativity, and people get pretty creative.

As to everything else, a benchmark tests the hardware, the software, the
compiler, and the care, insight, skill, and impartiality of whoever is
performing the benchmark. That's a lot of unknowns, no matter what you
call the result.

csaresearch.com has a skewed view of things resulting from a desire to
sell advertising? The "Seeing double?" stuff right on the web page you
linked to is probably a better clue than Randall Kennedy's c.v.

Someone is influenced by his "strong recommendations" despite an
apparent conflict of interest? Caveat emptor.

RM
 
Archived from groups: comp.sys.ibm.pc.hardware.chips

Hi Tony Hill,

>>> Dell driven out of a market by low-cost competition.
>>>
>>> http://biz.yahoo.com/rc/040816/tech_china_dell_1.html?printer=1
>>
>>How can Dell compete on price-performance in these markets when they
>>don't sell CPUs that provide better price-performance and features?
>
> The CPU has almost nothing to do with the price. The key phrase from
> the article is right here:
>
> "Sellers have cut prices to as little as 3,000 yuan ($362) per unit by
> offering models without Microsoft's Windows operating system"
>
> That is where the price difference is coming from. Windows is the ONLY
> expensive component in a modern low-end computer. The cost of a WinXP
> Home Edition license is roughly $100. The cost of service and support is
> another $100+. The cost of ALL the hardware comes to under $200 for
> a low-end system, and most of that is tied up in the hard drive and
> motherboard.
>
> When Dell buys Intel Celeron chips they are paying damn near nothing for
> them. Maybe $35 or $40. AMD might be able to sell their chips for $30
> or $35, shaving a few percent off the top, but even in China and other
> developing markets that isn't going to make a huge difference. But
> cutting $100 off the top by dropping WinXP from the price definitely
> will make a huge difference.

You make a great point, thanks Tony. But why would a savvy consumer choose
an Intel _Celeron_ over most AMD CPU choices? Doesn't Dell need to hope
that Intel's marketing is so strong in China that consumers will choose
the Intel brand even if computers are priced the same? If Dell cannot rely
upon this perception it cannot compete. Period. Even if it starts selling
"naked PCs". What happens if 64-bit computing becomes a checklist point?
Or gamers find out that an AMD Athlon64 3000+ beats a P4 3.2GHz _Extreme
Edition_ running Doom 3?

Intel has to provide Dell with suitable price:performance options so it
can compete effectively. Whether this is already hurting Dell is debatable.

Regards,
Adam
 
Archived from groups: comp.sys.ibm.pc.hardware.chips

Robert Myers <rmyers1400@comcast.net> wrote:
>> Now, is this a good measure of a multitasking workload? Only if you
>> consider a proper use of multitasking to be running one real-world
>> app in the foreground while disposable workload simulators bog it
>> down in the background.
>
> Your key claim (I believe) is that the benchmark software is a
> subterfuge by way of giving scheduling attention to the jobs on the
> hyperthreaded system but not on the Opteron system. That's an
> interesting theory, and it may well be correct, but your analysis
> rests on assumptions about the actual benchmark and about scheduling
> behavior that I don't know how to check.

To play my own devil's advocate, I'll list what we do know about the
benchmark and what we are conjecturing. We _know_ that the benchmark is
Hyperthreading-aware, that it runs one real-world application thread plus
multiple synthetic load-generating threads, and that the synthetic threads
are disposable (i.e. their results are not saved or measured). What we are
_conjecturing_ is that the benchmark uses its Hyperthreading awareness to
create an unfair multitasking priority advantage for the benchmarked
application. We don't know this for sure; for all we know, the benchmark
doesn't use any of its Hyperthreading knowledge to create an unfair testing
situation (i.e. complete innocence).

The conjecture is based upon the fact that it's easy to detect
Hyperthreading and to optimize for it. Detecting Hyperthreading can be done
completely in user-space: it doesn't require any privileged instructions,
simply a couple of CPUID instructions and you're done. Intel has specified
that during bootup all physical processors are enumerated first and all
virtual processors are enumerated last, so it's easy to figure out which
processors are real and which ones are virtual. Most OSes have some kind of
functionality that allows applications to specify which processors they
want their threads to run on.

Since this was a dual-processor vs. dual-processor shootout, the non-HT
system will appear simply as two CPUs, whereas the HT system will appear as
four. CPUID will tell you how many are real, how many are virtual, and
which ones they are.
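
As a minimal sketch of the user-space detection described above (this
assumes a compiler that provides a __cpuid intrinsic; older compilers would
use a few lines of inline assembly instead):

#include <intrin.h>
#include <stdio.h>

int main(void)
{
    int regs[4];  /* EAX, EBX, ECX, EDX */

    /* CPUID leaf 1: EDX bit 28 is the HTT flag; EBX bits 16-23 give
       the number of logical processors per physical package. */
    __cpuid(regs, 1);

    int htt_capable   = (regs[3] >> 28) & 1;
    int logical_count = (regs[1] >> 16) & 0xFF;

    printf("HTT flag: %d, logical processors per package: %d\n",
           htt_capable, logical_count);
    return 0;
}

From there it's one call to the OS's affinity interface (e.g.
SetThreadAffinityMask on Windows) to put a thread on a chosen logical
processor.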

> One can always, at least in theory, arrange job priorities so that
> background jobs interfere minimally with foreground jobs. Without any
> constraint on how the background jobs are hog-tied, you could probably
> get any result you wanted...if indeed you are fiddling with scheduling
> priorities.

Yeah, obviously they didn't want to appear to be fiddling with Windows' own
scheduling priorities, which would be too obviously unfair, so they worked
around them with the HT loophole. Since each logical processor appears to
have its own separate run queue in Windows, they didn't actually modify any
of the run-queue priorities; they just distributed the workloads
strategically, putting their most important threads on less busy logical
processors. That way they can claim that all of the individual run queues
were unchanged, which is true, but they have twice as many run queues to
choose from.

In an actual multitasking environment, with real work being done both in the
foreground and the background, the applications get distributed out to the
run queues in round-robin fashion. Therefore even with twice the run queues,
an HT processor will have more or less evenly loaded run queues, no
different from a non-HT processor.

> csaresearch.com has a skewed view of things resulting from a desire to
> sell advertising? The "Seeing double?" stuff right on the web page
> you linked to is probably a better clue than Randall Kennedy's c.v.

Perhaps it is a better clue. But I thought the fact that he himself says he
worked for an Intel marketing department was also a pretty good clue. :)

> Someone is influenced by his "strong recommendations" despite an
> apparent conflict of interest? Caveat emptor.

It's hard to say how much people are going to be influenced by this, since
the article published barely any of the benchmark results they said they
ran.

Yousuf Khan
 
Archived from groups: comp.sys.ibm.pc.hardware.chips

Neil Maxwell <neil.maxwell@intel.com> wrote:
> I haven't seen any articles with any data regarding the configurations of
> these competing systems. That is, which AMD chips is Lenovo putting in its
> bottom-end boxes vs. which Intel chips is Dell putting in its
> China-targeted bottom-tier PCs? This would tell you how much of a price
> impact the actual CPU has on the final system. Of course, even $20 is a
> fair bit of margin on a $360 PC.

I thought they mentioned that they were using Semprons here. I could be
wrong; with a lot of other news stories floating around, I can't keep them
all straight in my head.

> The article did note that Lenovo has been losing money, and is
> attempting to narrow losses by focusing on core businesses. It sounds
> to me like they're emphasizing market share over profits, which is a
> strategy Dell has never been too fond of. Dell's China growth
> estimate is still pretty hefty, and they may have decided to focus on
> the middle range where there are still some profits to be had. Time
> will tell which is the right approach for the China market.

I don't think Dell has had any problems with using loss-leader economics in
the past. Bring people in with products that are so cheap that they lose
money on them, and hopefully they'll buy some other things that will make up
for the loss.

I doubt that the Lenovo model is any different from that. Of course, it
might have gotten to the point in China that all products are now
loss-leaders (meaning companies are now producing losses overall). But Dell
is remaining in the higher-end Chinese markets, like business PCs, etc. So
if Dell is hanging around for those markets, then perhaps profits are still
to be had there. That means the locals, Lenovo and Founder, are probably
still making profits in those markets too.

The locals want to sell loss-leaders so that they can establish a product
identity with their customers for the future. If they buy a cheap PC today,
they'll buy an expensive PC tomorrow.

Yousuf Khan
 

Ed
Archived from groups: comp.sys.ibm.pc.hardware.chips

On Tue, 17 Aug 2004 22:08:53 GMT, "Yousuf Khan" <bbbl67@ezrs.com> wrote:


>The locals want to sell loss-leaders so that they can establish a product
>identity with their customers for the future. If they buy a cheap PC today,
>they'll buy an expensive PC tomorrow.

and that expensive PC will probably have a different brand name inside
and out (Dell/Intel). ;p

Ed

 
Archived from groups: comp.sys.ibm.pc.hardware.chips

Yousuf, many thanks for the analysis. Since the Xeon workstations were
admittedly slower to begin with, it was extraordinary that they ended up
having faster throughput while remaining highly responsive (this is
usually a tradeoff). If the total work done is never recorded, the paradox
is easily resolved ("Only the real-world app test-script's elapsed time is
ever recorded; the results of the simulated workloads are never measured,
just discarded, since they are only there to add a simulated load.")

Since you raised the link between the reviewer and the benchmark suite,
I've come across this Anandtech forum thread:
`First "real" Nocona vs. Opteron review?'
<http://forums.anandtech.com/messageview.cfm?catid=28&threadid=1348215>

Randall C. Kennedy starts by simply claiming: "Opteron is really good at
doing a few things at once. Saturate the CPU, however, and it tanks." In a
subsequent message he writes: "I meant vs. Xeon. Under complex workloads,
Xeon - especially the new Nocona-based model - stomps all over Opteron."

He makes a strong recommendation:

07/31/2004 11:10 AM

Typical. Your reaction to a poor showing by your CPU of preference is
to dismiss the test as being irrelevant. A bit pathological, don't you
think?

Unfortunately, in my position I don't have the luxury of becoming
emotionally attached to products. My customers - who are primarily in
the financial services sector - have zero tolerance for delays. Time
is literally money for these people, and my workloads model their
runtime environment (which is a huge target market for workstation
vendors).

Bottom Line: I'm strongly recommending that my customers avoid
Opteron-based workstations for demanding, multi-process, multi-tasking
workloads, and I'm echoing these sentiments in my InfoWorld Test Center
contributions on the subject.

RCK

-------------------------
Director, CSA Research
http://www.csaresearch.com

Regards,
Adam
 
Archived from groups: comp.sys.ibm.pc.hardware.chips

Adam Warner <usenet@consulting.net.nz> wrote:
> Since you raised the link between the reviewer and the benchmark suite,
> I've come across this Anandtech forum thread:
> `First "real" Nocona vs. Opteron review?'
> http://forums.anandtech.com/messageview.cfm?catid=28&threadid=1348215

Yeah, I know about those remarks of his on Anandtech too. He posted them
a month or two before this Infoworld article came out.

It was enough for me to join Anandtech's forums and post a message asking
him to explain his benchmark methodologies. So far I haven't received any
response from him, but he's likely not following the thread right now.

Yousuf Khan
 
Archived from groups: comp.sys.ibm.pc.hardware.chips

> http://analyzer.csaresearch.com/

Error: Incompatible Browser Detected

We're Sorry! This Performance Portal site requires Microsoft Internet
Explorer 5.0 or later (IE 5.5 or later recommended). You can obtain the
latest version of IE from the Microsoft Internet Explorer web site.

Note: For a complete list of system requirements, please see our
Performance Portal product information page.
 
Archived from groups: comp.sys.ibm.pc.hardware.chips

Adam Warner wrote:
>>http://analyzer.csaresearch.com/
>
>
> Error: Incompatible Browser Detected
>
> We're Sorry! This Performance Portal site requires Microsoft Internet
> Explorer 5.0 or later (IE 5.5 or later recommended). You can obtain the
> latest version of IE from the Microsoft Internet Explorer web site.
>
> Note: For a complete list of system requirements, please see our
> Performance Portal product information page.

If they can't even do a decent job with their web site,
how the heck is anybody supposed to believe they have
successfully tackled the more difficult job of creating
a valid benchmarking app?
 
Archived from groups: comp.sys.ibm.pc.hardware.chips

On Tue, 17 Aug 2004 10:15:28 -0600, Rob Stow <rob.stow@sasktel.net>
wrote:

>Adam Warner wrote:
>>>http://analyzer.csaresearch.com/
>> Error: Incompatible Browser Detected

>If they can't even do a decent job with their web site,
>how the heck is anybody supposed to believe they have
>successfully tackled the more difficult job of creating
>a valid benchmarking app ?

Might not be their fault... they used FrontPage... a software company
used to deliberately make their apps break when run on a rival
operating system back in the DOS days, you know :ppPP

--
L.Angel: I'm looking for web design work.
If you need basic to med complexity webpages at affordable rates, email me :)
Standard HTML, SHTML, MySQL + PHP or ASP, Javascript.
If you really want, FrontPage & DreamWeaver too.
But keep in mind you pay extra bandwidth for their bloated code
 
Archived from groups: comp.sys.ibm.pc.hardware.chips

The little lost angel wrote:

> On Tue, 17 Aug 2004 10:15:28 -0600, Rob Stow <rob.stow@sasktel.net>
> wrote:
>
>
>>Adam Warner wrote:
>>
>>>>http://analyzer.csaresearch.com/
>>>
>>> Error: Incompatible Browser Detected
>
>
>>If they can't even do a decent job with their web site,
>>how the heck is anybody supposed to believe they have
>>successfully tackled the more difficult job of creating
>>a valid benchmarking app ?
>
>
> Might not be their fault... they used FrontPage... a software company
> used to deliberately make their apps break when ran on a rival
> operating system back in the DOS days you know :ppPP
>

They're dumb enough to use FrontPage but we are
supposed to trust them to be smart enough to make
a valid benchmarking app? "Does not compute."
 
Archived from groups: comp.sys.ibm.pc.hardware.chips

Hi Yousuf Khan,

> Now, is this a good measure of a multitasking workload? Only if you
> consider a proper use of multitasking to be running one real-world app
> in the foreground while disposable workload simulators bog it down in
> the background.

If the amount of work done in the background is never taken into account,
then the technique is grossly misleading. Here's how the testing technique
could be improved:

1. Measure the amount of work completed by the Xeon workstation in the
simulated workloads.

2. On the Opteron workstation reduce the priority on the simulated
workloads until the Opteron only completes as much work in the simulated
workloads as the Xeon.

3. Compare the responsiveness and throughput of the foreground real-world
application while each workstation is approximately completing _the same
amount of background work_.

Everyone who multitasks cares about how much work is being done in the
background.
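
To make step 2 concrete, here's a minimal sketch of the calibration loop.
The measurement functions are hypothetical stubs (a real harness would run
the simulated workloads for a fixed interval and count completed work
units); only the control flow is the point:

#include <stdio.h>

/* Hypothetical stubs standing in for real measurements. */
double xeon_background_work(void)        { return 100.0; }
double opteron_background_work(int prio) { return 100.0 + prio; }

int main(void)
{
    double target = xeon_background_work();  /* step 1: the Xeon's total */
    int prio = 0;  /* 0 = normal; more negative = lower priority */

    /* Step 2: lower the simulated workloads' priority until the Opteron
       completes no more background work than the Xeon did. */
    while (opteron_background_work(prio) > target)
        prio--;

    printf("Matched background throughput at priority offset %d\n", prio);

    /* Step 3: now compare foreground responsiveness and throughput at
       this matched background load. */
    return 0;
}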

Regards,
Adam
 
Archived from groups: comp.sys.ibm.pc.hardware.chips

Adam Warner <usenet@consulting.net.nz> wrote:
> If the amount of work done in the background is never taken into
> account, then the technique is grossly misleading. Here's how the
> testing technique could be improved:
>
> 1. Measure the amount of work completed by the Xeon workstation in the
> simulated workloads.
>
> 2. On the Opteron workstation reduce the priority on the simulated
> workloads until the Opteron only completes as much work in the
> simulated workloads as the Xeon.

This is quite possible to do on the Opteron using just Windows'
task-switching mechanisms: raise the priority of the foreground process
while reducing the priorities of the disposable workloads.
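
For instance, a minimal Win32 sketch (the thread handles are hypothetical
stand-ins for the benchmark's worker threads):

#include <windows.h>

/* Boost the measured foreground thread and demote the disposable loaders,
   using nothing but the ordinary Windows scheduler. */
void rebalance(HANDLE measured, HANDLE loaders[], int n)
{
    SetThreadPriority(measured, THREAD_PRIORITY_ABOVE_NORMAL);
    for (int i = 0; i < n; i++)
        SetThreadPriority(loaders[i], THREAD_PRIORITY_BELOW_NORMAL);
}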

> 3. Compare the responsiveness and throughput of the foreground
> real-world application while each workstation is approximately
> completing _the same amount of background work_.
>
> Everyone who multitasks cares about how much work is being done in the
> background.

Yes, exactly. If you're multitasking in the background, then chances are
that the programs running in the background are just as important to you as
those in the foreground. In both cases you're trying to get some useful
work done, otherwise you wouldn't be running the secondary processes.

If this guy had only given the benchmark the ability to run a second
test-script with a real-world application in it too, and then measured its
completion time, it would be a really worthwhile tool. It would be useful
to know how fast the system could run all of the tasks it is running, not
just one task.

Yousuf Khan
 
Archived from groups: comp.sys.ibm.pc.hardware.chips

On Tue, 17 Aug 2004 11:11:07 -0600, Rob Stow <rob.stow@sasktel.net>
wrote:
>They're dumb enough to use FrontPage but we are
>supposed to trust them to be smart enough to make
>a valid benchmarking app ? "Does not compute."

Well, even though I personally am biased against people who use
FrontPage, a lot of people do simply because they don't know any
better or can't be bothered to learn anything else. Don't forget,
often the web developer isn't a permanent staff member of the company;
it's usually a contract job. So there may be no relation between the
capabilities of the web designer and the company itself. :)



--
L.Angel: I'm looking for web design work.
If you need basic to med complexity webpages at affordable rates, email me :)
Standard HTML, SHTML, MySQL + PHP or ASP, Javascript.
If you really want, FrontPage & DreamWeaver too.
But keep in mind you pay extra bandwidth for their bloated code
 
Archived from groups: comp.sys.ibm.pc.hardware.chips

Ed wrote:
> On Tue, 17 Aug 2004 22:08:53 GMT, "Yousuf Khan" <bbbl67@ezrs.com>
> wrote:
>> The locals want to sell loss-leaders so that they can establish a
>> product identity with their customers for the future. If they buy a
>> cheap PC today, they'll buy an expensive PC tomorrow.
>
> and that expensive PC will probably have a different brand name inside
> and out (Dell/Intel). ;p

Maybe that future PC will have an Intel inside it, but I don't think it'll
be a Dell. The locals are trying to build brand loyalty to themselves with
these loss-leaders.

Anyways, the really high-end PCs are now Athlon 64-based. Pentium 4 PCs are
now mid-range.

Yousuf Khan