Dual Core or Quad Core?
Christopher Chisholm
christopher.chisholm at syamsoftware.com
Fri Jun 29 11:58:24 EDT 2007
Derek Atkins wrote:
> "Tom Buskey" <tom at buskey.name> writes:
>
>
>> A few points:
>>
>> The Macintosh community had debates in the past about SMP vs single.
>> Generally they think a dual 500 MHz is roughly like a single 700 MHz.
>> From that subjective information, I'd say more, slightly slower cores
>> are better.
>>
>
> This is probably true, as each core can be working on a separate
> process, so you have less context switching.
>
>
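If you want to see how much context switching a process actually does,
the Linux kernel keeps per-process counters in /proc. A quick Python
sketch (Linux-only):

    # Read the context-switch counters for the current process.
    # voluntary_ctxt_switches = the process blocked and yielded the CPU;
    # nonvoluntary_ctxt_switches = the scheduler preempted it.
    with open("/proc/self/status") as f:
        for line in f:
            if "ctxt_switches" in line:
                print(line.strip())
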
>> I've felt that dual CPUs have lower latency when multitasking. The OS
>> runs on one CPU, software RAID (why spend more for a dedicated hardware
>> RAID card?), your app on another, etc. IMHO latency is more important
>> than throughput for interactive use.
>>
>
> This is probably related to fewer context switches, but keep in mind
> the memory bandwidth.
>
>
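You can play with that split yourself by pinning a process to a core. A
minimal sketch using Python's os.sched_setaffinity (Linux, Python 3.3+);
which core you pick is up to you:

    import os

    # Pin this process to core 0, leaving the remaining core(s) free
    # for the OS, software RAID, etc.  (pid 0 = the current process.)
    os.sched_setaffinity(0, {0})
    print("now running on cores:", os.sched_getaffinity(0))
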
>> I've been looking at a VMware ESX server. It's licensed per 2 CPUs, and
>> a quad core counts the same as a single or dual core in their licensing.
>> I'm finding that a dual quad-core is cheaper than adding RAM + 1 CPU to
>> 2 systems with 3 single-core CPUs between them.
>>
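If the license really counts sockets rather than cores, the arithmetic is
easy to sketch; the 2 + 1 socket split below is just my guess at how
those 3 single-core CPUs are spread across the 2 systems:

    import math

    # ESX-style per-2-CPU licensing: a "CPU" here is a socket, so the
    # number of cores per socket doesn't matter.
    def licenses_needed(sockets):
        return math.ceil(sockets / 2)

    # One dual-socket quad-core box (8 cores total):
    print(licenses_needed(2))                        # -> 1 license
    # Two boxes with 3 single-core CPUs between them (2 + 1 sockets):
    print(licenses_needed(2) + licenses_needed(1))   # -> 2 licenses
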
>> Those 1.6 GHz CPUs might use less power & generate less heat.
>>
>
> "might" being the key operative word here. Check the specs.
>
>
>> The real limit on your application will likely be I/O: bus speed (FSB),
>> network, disk speed, memory speed, etc. How much data are they moving
>> around? More RAM will also help more than CPU GHz.
>>
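A cheap way to check whether a box is I/O-bound rather than CPU-bound on
Linux is the iowait column in /proc/stat. A rough Python sketch that
samples it over one second:

    import time

    # The first line of /proc/stat is cumulative jiffies per CPU state:
    # user nice system idle iowait irq softirq ...   (Linux only.)
    def cpu_times():
        with open("/proc/stat") as f:
            return [int(x) for x in f.readline().split()[1:]]

    before = cpu_times()
    time.sleep(1)
    after = cpu_times()
    delta = [b - a for a, b in zip(before, after)]
    # A high iowait fraction means the disks, not the CPUs, are the limit.
    print("iowait: %.1f%%" % (100.0 * delta[4] / sum(delta)))
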
>
> Keep in mind the memory bus issues. In particular looking at Intel vs
> AMD Quad-cores, the Intel quads are effectively two Dual-cores in a
> single package and they share a memory controller, whereas the AMD
> quads will theoretically each have a memory controller. What this
> means is that you get higher memory throughput (and lower latency) on
> AMDs than Intels. I just don't know which applications this affects.
>
I've always liked the AMD architecture because of their "HyperTransport
bus", which is basically a fancy way of saying that certain things get a
dedicated bus. Intel's architecture (unless something has recently
changed) still has everything going through the front-side bus. AMD's
processors have a memory controller integrated on each processor itself,
along with a dedicated bus to the memory it uses. So, you may need more
sticks of RAM, but in theory the bus architecture is highly optimized
(the memory bus won't be affected by what's going on with the network,
HDDs, etc.).
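On Linux you can actually see that layout: each NUMA node shows up under
/sys along with the CPUs attached to it. A quick Python sketch (needs a
NUMA-aware kernel):

    import glob

    # Each /sys/devices/system/node/nodeN is one NUMA node -- on AMD,
    # roughly one socket plus its own integrated memory controller.
    for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        with open(node + "/cpulist") as f:
            print(node.rsplit("/", 1)[-1], "-> cpus", f.read().strip())
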
For a single-user environment, it seems like benchmarks more or less
show that the difference isn't huge, but I could see how, with 50 users
each doing their own thing, AMD's approach may work better. That's
purely a (somewhat) educated guess; it might not be true.
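That guess would be easy enough to test: spawn N workers that each chew
on memory and watch how the times scale. A crude Python sketch, where
the worker is just a stand-in for "a user doing their own thing":

    import multiprocessing, time

    def worker(_):
        # Stand-in for one user's workload: touch a chunk of memory.
        buf = bytearray(50 * 1024 * 1024)
        for i in range(0, len(buf), 4096):
            buf[i] = 1

    if __name__ == "__main__":
        for n in (1, 2, 4, 8):
            start = time.time()
            with multiprocessing.Pool(n) as pool:
                pool.map(worker, range(n))
            print("%d workers: %.2fs" % (n, time.time() - start))

If memory bandwidth is the bottleneck, the times won't stay flat as N
grows.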
As a side note, I know Xeon heatsinks are screwed into the motherboard
for a nice solid connection, but all the other Intel chips use what I
think is the worst idea ever conceived for a heatsink clip. AMD's
heatsink fastening system feels so solid when you clamp it down, whereas
all the non-Xeon heatsinks are held on with cheap plastic gadgets that
everyone seems to have problems with. This doesn't really pertain to
this issue since we're talking Xeon, but I've never quite been able to
get over that... :-)
> -derek
>
-chris