
AMD FX-8150 vs Intel i7-3820: Which can run more VM bots / non-VM bots?

The amount of misinformation about "modern CPUs" out there is overwhelming.
I'm assuming there are few engineers here, and that most stopped reading after Computer Organization, 3rd ed., by Hennessy and Patterson.
To understand what's going on with modern CPUs, you actually need to read papers from ISSCC or ACM publications.
I'll clear up a few common misconceptions here.

* Instructions per cycle (IPC) is a flawed performance metric nowadays:
- IPC measures instruction-level parallelism (ILP); there is also thread-level parallelism and more.
- IPC gives you an idea of pipeline depth and ILP.
- ILP versus performance saturates for many reasons, mainly the sharing of resources in cache fetches and execution (limited bus bandwidth, cache coherency, and decode complexity).
- e.g., the Core microarchitecture (~12-stage pipeline) outperforms the Pentium 4 (20+ stage pipeline) despite the P4's much higher clock, so no single number tells the whole story (see the rough arithmetic sketch below).
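To make that last point concrete, here is a rough back-of-the-envelope sketch in C. The pipeline depths come from the list above; the IPC, clock, and branch-misprediction figures are illustrative assumptions, not measured values:

/* Back-of-the-envelope: why neither IPC nor clock speed alone predicts
 * performance. Pipeline depths are from the post above; the IPC, clock,
 * and misprediction numbers are assumed purely for illustration. */
#include <stdio.h>

int main(void)
{
    /* A deeper pipeline pays roughly its depth in cycles on every
     * mispredicted branch, which eats into the headline throughput. */
    double core_ipc = 1.5, core_ghz = 2.4, core_depth = 12.0;
    double p4_ipc   = 1.0, p4_ghz   = 3.4, p4_depth   = 22.0;
    double mispredicts_per_instr = 0.01;  /* assumed: 1 mispredict per 100 instructions */

    double core_cpi = 1.0 / core_ipc + mispredicts_per_instr * core_depth;
    double p4_cpi   = 1.0 / p4_ipc   + mispredicts_per_instr * p4_depth;

    printf("Core-like design: %.2f G instr/s\n", core_ghz / core_cpi);  /* ~3.0 */
    printf("P4-like design:   %.2f G instr/s\n", p4_ghz   / p4_cpi);    /* ~2.8 */
    return 0;
}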

* Why is a change in socket design necessary?
- Most people would love to keep the same socket across CPU generations, but here's why it has to change:
- Improved voltages (lower supply voltages / thresholds) and termination give better performance per watt, faster switching, etc.
- A change in the memory bus (DDR3 -> Wide I/O, or more channels) requires more data lines and more control signals.
- A change in CPU design => a change in thermal dissipation (hot spots) => a change in module placement & routing => a change in pin assignment.

People, please feel free to ask about a CPU and let's not assume anything.
Listen to this guy, he knows what he is talking about :P

Now, all you internet experts on CPUs, listen to him where he says AMD are 12-18 months BEHIND Intel
 
So what you have is a situation where the video driver sees the process running your VM (vmware-vmx.exe) access the video card. Then, when you launch another WoW inside that VM, all the driver sees is the same VM process asking to use the video card again. Because it's the same process asking for the video card twice, it may think it's the same game, or that the first call to the video card ended/crashed and it's starting a fresh one, or it simply can't handle running two separate sessions on the video card from the same VM container process.

It's not hard to imagine that some confusion is happening, and confusing something on a computer quite often makes it crash.
 

aww zeewolf, check your PMs :)
 
That's down to driver stability; you often see Nvidia and AMD release updates that have specific fixes for certain games. If you're running the game inside a VM, the driver likely won't recognise the game and those fixes won't be used. Running games inside VMs is a bit of a niche that I suspect the video card manufacturers won't have spent much time testing their drivers against.

I just bricked my 3D apps in VMware instances after upgrading from NVidia 296.10 to 301.42.
 
I have a 3820 system running VMs, each with 2 D3s. The system has 32GB of RAM and a Linux host. The processor is air-cooled with a Phanteks PH-TC14PE and running at a 38 ratio with a 1.25 BCLK strap (38 x 125MHz), resulting in 4.75GHz.
I have SSDs (Crucial M4s) holding 2 VMs each. Currently I run 8 VMs (16 D3s) at about 85% utilization.

My D3s have the smallest possible configs that can be chosen from the game + prefs.ini.

Note that I'm not using DB on all of them, as I missed upgrading them when the injunction began, but utilization is about the same with both bots (the other isn't that notorious bot, just a simpler home-cooked one).
 
Quoting from [Solved] Amd fx 8150 overclocked - CPUs - CPU-Components

techpops 02-14-2012 at 04:17:31 PM said:
I was just putting together a list for myself that might help you get a better picture as far as multithreading goes. I compiled these from watching videos of benchmarks so I could be sure they were real, but there has to be some wiggle room considering all the benchmarks were done on completely different systems. So take them as approximations (although Cinebench is hardly affected by things like memory speed, so I think the scores are useful). Where possible I looked at several benchmarks for the same CPU at the same speed and averaged the scores, but for many I could only find one test. I'm only showing CPUs that interested me, so it's not a complete list by any means.

Cinebench 11.5 Multithreaded test

8 threads
Intel i5 2500k 3.3GHz - 05.93
Intel i5 2500k 4.5GHz - 06.99
Intel i5 2500k 4.7GHz - 07.35
Intel i5 2500k 5.0GHz - 07.90

8 threads
Intel i7 2600k 3.4GHz - 06.46
Intel i7 2600k 4.0GHz - 07.81
Intel i7 2600k 4.5GHz - 08.57
Intel i7 2600k 5.0GHz - 09.58

8 threads
Intel i7 2700k 3.5GHz - 07.51
Intel i7 2700k 5.0GHz - 09.67

8 threads
Intel i7 3820 3.6GHz - 07.40
Intel i7 3820 4.6GHz - 08.98

12 threads
Intel i7 3930K 3.2GHz - 10.14
Intel i7 3930K 4.5GHz - 13.06
Intel i7 3930K 4.8GHz - 13.79

12 threads
Intel i7 3960X 3.3GHz - 10.50
Intel i7 3960X 4.6GHz - 13.42

4 threads
AMD 960T 3.0GHz - 03.42
6 threads (2 unlocked)
AMD 960T 4.1GHz - 07.32

8 threads
AMD FX 8120 3.1GHz - 04.96
AMD FX 8120 3.5GHz - 05.48
AMD FX 8120 3.7GHz - 05.57
AMD FX 8120 4.2GHz - 05.83

6 threads
AMD X6 1100T 3.3GHz - 05.85
AMD X6 1100T 4.2GHz - 07.25

8 threads
AMD FX 8150 3.6GHz - 05.98
AMD FX 8150 4.4GHz - 07.02

The first result is always the stock speed; the highest results may or may not be suitable for 24/7 use, you'd have to decide for yourself. BTW, Cinebench is the only benchmark I use, as it's based on what I work in all day, Cinema4D.



HTH
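One quick way to read those numbers is points per GHz at stock, which separates per-clock efficiency from raw clock speed. A small sketch in C using a few of the stock figures quoted above (the choice of chips is just an example):

/* Normalise some of the quoted stock Cinebench 11.5 scores by clock speed.
 * Scores and clocks are taken from the list above; nothing else is measured. */
#include <stdio.h>

int main(void)
{
    /* { name, stock clock in GHz, Cinebench 11.5 multithreaded score } */
    struct { const char *name; double ghz, score; } cpus[] = {
        { "Intel i7 2600k", 3.4, 6.46 },
        { "Intel i7 3820",  3.6, 7.40 },
        { "AMD FX 8150",    3.6, 5.98 },
    };

    for (int i = 0; i < 3; i++)
        printf("%-15s %.2f points/GHz\n", cpus[i].name, cpus[i].score / cpus[i].ghz);
    /* roughly: 2600k 1.90, 3820 2.06, FX 8150 1.66 */
    return 0;
}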
 
mephuser, any idea how the i7-3770K would perform in those multithreaded tests?
 
I'd be interested to see what the 8150 does at 5GHz, which is easily doable with an H100.
 
Listen to this guy, he knows what he is talking about :P

Now, all you internet experts on CPUs, listen to him where he says AMD are 12-18 months BEHIND Intel

No, no, no.
1. Only the foundry services (immersion lithography / etching / MOS tech / ...) are 12-18 months behind Intel.
+ This is due to Intel's head start in immersion litho for finer resolution, better OPC, research in EUV equipment, and most importantly the 3D transistor.
+ Simply put, Intel has the patents and the money to invest in them ($ = research).
2. Design and the rest of the H/W architecture are not directly comparable, because the approaches differ so much.
3. AMD's design methodology is ahead of Intel's, IMO.

When it comes to next-generation stuff...
+ Through-silicon via (TSV) and Wide I/O caches they are researching similarly (jointly with Samsung and Hynix). The PS Vita SoC is 1st-gen TSV, but next-gen CPUs will soon have these (instead of an 8MB cache, imagine an asymmetric 8Gbit cache).
+ However, Intel does have a certain edge, because Haswell will come with hardware transactional memory (atomic transactions for parallel programming done in hardware).
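For anyone curious what that hardware transactional memory looks like from software, here is a minimal sketch using Intel's RTM intrinsics. It assumes a Haswell-or-later CPU and a compiler built with RTM support (e.g. GCC with -mrtm); the shared counter and fallback path are purely illustrative:

/* Minimal sketch of hardware transactional memory via Intel RTM intrinsics.
 * Assumes a Haswell-or-later CPU and a compiler supporting -mrtm. */
#include <immintrin.h>
#include <stdio.h>

static long counter = 0;

static void increment(void)
{
    unsigned status = _xbegin();              /* try to start a transaction   */
    if (status == _XBEGIN_STARTED) {
        counter++;                            /* executes atomically in H/W   */
        _xend();                              /* commit the transaction       */
    } else {
        /* Transaction aborted (conflict, capacity, ...): fall back to a
         * conventional atomic operation instead. */
        __sync_fetch_and_add(&counter, 1);
    }
}

int main(void)
{
    increment();
    printf("counter = %ld\n", counter);
    return 0;
}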
 
I'd be interested to see what the 8150 does at 5GHz, which is easily doable with an H100.

The H100 is very expensive; why get an AMD for its cheaper price only to spend all the saved money on an H100?? It is huge and loud as well, and most cases won't be able to fit it.
 

Don't use the Cinebench benchmark for CPU performance unless you intend to use the on-board GPU.
There are multiple benchmarks because they test different capabilities of the CPU.

In general, the Cinebench benchmark checks two things: CPU performance and GPU performance.
The CPU test renders a scene and lets the CPU do all of the rendering work,
i.e. software vertex processing. In computer terminology, that's SIMD performance (Single Instruction, Multiple Data).

This is when an application applies a single instruction to a large amount of data.
Instead of { Operation, Value, Value },
it does one wide operation such as { Operation, Value, Value, Value, ..., Value } (a short example follows below).
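A tiny illustration of that difference using SSE intrinsics in C (the arrays and values here are made up for the example):

/* One SSE instruction operating on four floats at once, versus four
 * scalar adds. Purely illustrative; assumes an x86 compiler with SSE. */
#include <xmmintrin.h>
#include <stdio.h>

int main(void)
{
    float a[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    float b[4] = { 10.0f, 20.0f, 30.0f, 40.0f };
    float c[4];

    /* Scalar form: { add, a[i], b[i] } issued once per element.   */
    /* SIMD form: a single _mm_add_ps adds all four lanes at once. */
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(c, _mm_add_ps(va, vb));

    printf("%.0f %.0f %.0f %.0f\n", c[0], c[1], c[2], c[3]);  /* 11 22 33 44 */
    return 0;
}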

Mainly, what it is testing is scientific-computing / encoding / decoding capability.
Intel provides this through the SSE extensions. However, since the design choice was that SIMD would be handled by GPUs more than by CPUs, it was not considered a big item in the CPU design budget.
Of course this changed with Ivy Bridge, where the graphics core plays a significantly bigger role in the CPU, sharing the last-level cache (LLC) and all.

Anyway, Cinebench is a great benchmark for SIMD processing. However, games run these operations on the GPU rather than the CPU.

tl;dr: Cinebench is a SIMD benchmark
 
Isn't it better to buy several weaker, energy-saving computers instead of one super expensive computer?
 