At last, I have a decent, state-of-the-art PC for Blender.

Basically, here are the specs (http://www.ldlc.com/fiche/PB00127585.html):

  • Second generation Quad-Core Intel Core i7 (Intel Core i7-3820 - 3.6/3.9 GHz)
  • Motherboard Asus P9X79 (Intel X79 Express)
  • 16 GB DDR3-SDRAM PC3-12800 (1600 MHz)
  • SSD 120 GB Serial ATA 6Gb/s
  • 1 TB Serial ATA 6Gb/s hard drive, 7200 rpm
  • NVIDIA GeForce GTX 660 Ti 2 GB graphics card
  • Noctua NH-D14 SE2011 Fan
  • Audio 8 channels HD with THX TruStudio support
  • 650W LDLC Quality Select TA-650 80PLUS Bronze power supply
  • Blu-ray reader / DVD Super Multi DL writer
  • Corsair Obsidian 550D silent mid-tower case

LD0001171752_2.jpg

Of course, it runs Linux Mint 14 MATE with the NVIDIA proprietary drivers and the CUDA toolkit.


My test was to render one of my previous doodles using Cycles (128 samples, at 50% of 1920x1080 resolution).


04-modelisation-exemple_spin.png

It was originally rendered on a small mid-2011 13" MacBook Air in 2min35s.


With this new computer running Linux Mint 14 on its quad-core Intel Core i7 with Hyper-Threading (i.e. 8 threads), it was rendered in 0min48s with 64x64 tiles.
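For what it's worth, these render settings can also be set from Blender's Python console instead of the UI. This is only a sketch: the attribute names below match Blender's 2.6x-era `bpy` API and run inside Blender only, not in a standalone Python interpreter.

```python
import bpy  # available only inside Blender

scene = bpy.context.scene

# The test scene: Cycles, 128 samples, at 50% of 1920x1080.
scene.cycles.samples = 128
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
scene.render.resolution_percentage = 50

# 64x64 tiles, as used for the CPU run.
scene.render.tile_x = 64
scene.render.tile_y = 64
```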


Last but not least, using CUDA on the GPU, the render lasted only 0min25s with 64x64 tiles (almost twice as fast!) and 0min17s with 256x256 tiles (almost three times as fast!).
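Putting the numbers side by side, here is a quick Python sketch computing the overall speedups relative to the MacBook Air baseline:

```python
# Render times from the tests above, in seconds.
baseline = 2 * 60 + 35   # MacBook Air (mid-2011): 2min35s
results = [
    ("CPU i7, 64x64 tiles", 48),
    ("CUDA, 64x64 tiles", 25),
    ("CUDA, 256x256 tiles", 17),
]

for label, seconds in results:
    speedup = baseline / seconds
    print(f"{label}: {speedup:.1f}x faster than the MacBook Air")
```

The GPU with big tiles ends up roughly 9x faster than the old laptop overall.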

I really wonder how I managed to use Cycles before on a dual-core machine with no GPU computing :-)