Building an editing computer for Resolve 16


  • gskguston
    replied
    Hey Everyone,

    So, as I said before, I'm posting now that I've completed the build. I really thank everyone here for giving me feedback. I had to make decisions based on my small-scale situation and limited budget, but I did incorporate a lot of the feedback that was given.

    Here is the Build:

    NZXT H510
    Gigabyte Z390 Designare
    Intel i9-9900K 8-core
    64GB DDR4-3600 RAM
    1x Samsung 1TB SSD
    2x Samsung 970 EVO Plus 1TB M.2 2280 PCIe 3.0 x4 (NVMe)
    1x Seagate IronWolf 12TB HDD
    NZXT Kraken X62
    ASUS ROG Strix GeForce RTX 2080 Ti Gaming Advanced Edition 11GB
    Blackmagic Design MultiDock with 8TB of SSDs
    Blackmagic Design DeckLink Mini Monitor 4K

    I won’t post benchmarks unless people are interested, but I will say that I have been editing 4.6K BRAW in Resolve and there has never been any issue. I have loaded it up with spatial NR and a host of nodes, and it plays back at 24fps flawlessly. I tried running the timeline at 8K and it played back at 24fps with an initial lag of under one second. The thing I have been most impressed by is how cool the graphics card and CPU have stayed through heavy work over many hours; neither has gone past 45°C. I have not had time to do extensive testing, but I can say this machine is working really well for Resolve. If you have any questions, let me know and I will try to get to them.

    Thanks again for all the help
    Photos attached: Ghost-Exterior.jpg, Ghost-Interior.jpg

  • Frank Glencairn
    replied
    Originally posted by gskguston:

    Keeping the system cool was one of my main goals and that is why I went with the water cooling. I have read that this provides a significant reduction in temp.

    Tests have shown over and over that Noctua air coolers out-cool (is that even a word?) water-cooling systems, which just add multiple extra points of failure, complication, and price.



    To keep your overall system cool, make sure you have good airflow in your case.

  • Marckusw
    replied
    GSKGuston, that is a nice build and I think you were smart to go with the 2080ti.
    Last edited by Marckusw; 08-12-2019, 02:03 PM.

  • dermot shane
    replied
    i have two main machines in my studio, two more support machines, and another two machines at home

    at the studio i'm using an HP Z820 / dual 12c / 128g / 2x 1080 Ti / 64TB raid / 10GbE and another Z820 with 2x 8c / 128g / 2x 1080 Ti / 32TB raid / 10GbE; i switch between them as needed, leapfrogging projects

    i have a Z800 / 2x 6c / 96g / 1x 1080 / 32TB raid, mainly used for conforming, fixes, outputs

    and a Z400 / 1x 3650 / 16g / 1050 Ti that hosts the Drastic scopes, but can run Resolve at HD adequately should i need it

    and a QNAP NAS with 10GbE / 64TB

    at home i have a Z620 / 2x 10c / 96g / 1x 1080 / 16TB raid + a Z400 hosting UltraScope

    all HP

  • gskguston
    replied
    Hey Everyone,

    Firstly, thanks a lot for all the feedback. I have been delving deep on the research side and have decided to change the build. I found a way to make this a PC-only system, and that changed the configuration considerably. I should reiterate that this is being built solely to run Resolve and that I wanted to keep the price at around US$3,000; when thinking about my specific circumstances and ROI, I couldn't justify more. I realize that there are limitations to this configuration regarding backup capabilities, CPU speed, etc. I also wanted the computer to be movable to an extent, so I decided to go with an ATX mid-tower. Keeping the system cool was one of my main goals, which is why I went with water cooling; I have read that this provides a significant reduction in temperature. I also privileged the GPU over the CPU, as many tests have shown that Resolve is more GPU-intensive, and I limited the RAM because the tests I have seen show limited gains from going over 64GB. Again, please let me know what you think, and I would also be really interested to hear the specs of the systems people are running.

    Mid Tower ATX build

    CPU: Intel Core i9-9900K 3.6 GHz 8-Core Processor
    CPU Cooler: NZXT Kraken X22 Liquid CPU Cooler
    Motherboard: Gigabyte Z390 DESIGNARE ATX LGA1151 Motherboard
    Memory: Corsair Vengeance LPX 64GB (4 x 16GB) DDR4-3200 Memory
    Storage: 2x Samsung 970 EVO 1TB M.2-2280 NVMe Solid State Drive
    External Storage (editing directly from): Blackmagic MultiDock 10G with 4x Samsung 2TB SSDs
    External Storage (holding non-active media): 3x G-RAID 20TB
    Video Card: NVIDIA GeForce RTX 2080 Ti 11GB Founders Edition
    Case: NZXT H700 ATX Mid Tower Case
    Power Supply: Corsair HX Platinum 1000W 80+ Platinum Certified Fully Modular ATX Power Supply
    GPU Cooler: NZXT Kraken G12 mounting bracket with NZXT Kraken X62 Rev 2 73.11 CFM liquid cooler
    Video I/O: Blackmagic DeckLink Mini Monitor 4K

  • Marckusw
    replied
    You should be thinking dual Xeons on an older board with more RAM; V3 chips can be had relatively cheap. The i9 8-core with hyper-threading is really only going to run at 3.6. Goosing a box you are going to render work on is questionable to downright dumb. (I am a senior citizen so I get to be an ass.)

    A non-scientific but close-to-real way to explain this is to take a single Xeon E5-2697 v3 with 14 cores / 28 threads running at 2.6 GHz: 28 x 2.6 = 72.8 (a reliable refurb is the same price as the i9).
    The i9's 8 cores / 16 threads running at 3.6 GHz: 16 x 3.6 = 57.6.

    If you run dual v2 Xeons (2697 v2, 12 cores each) you would be at about 2 1/4 times the i9's score: 48 threads x 2.7 = 129.6 (a reliable refurb pair will be $200 more than an i9 8-core).
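    The threads-times-base-clock scores above can be reproduced in a few lines of Python. This is a rough sketch of the same back-of-envelope metric; it ignores IPC differences, turbo clocks, and memory bandwidth, and the core counts and clocks are taken straight from the post.

```python
# Naive CPU throughput score: logical threads x base clock (GHz).
# A rough comparison only -- ignores IPC, turbo behavior, and memory bandwidth.
def throughput_score(cores: int, threads_per_core: int, base_ghz: float) -> float:
    """Return logical threads multiplied by base clock in GHz."""
    return cores * threads_per_core * base_ghz

# Figures from the post:
xeon_2697_v3 = throughput_score(14, 2, 2.6)      # single E5-2697 v3, ~72.8
i9_9900k     = throughput_score(8, 2, 3.6)       # i9-9900K, ~57.6
dual_2697_v2 = 2 * throughput_score(12, 2, 2.7)  # dual E5-2697 v2, ~129.6

print(f"dual v2 vs i9: {dual_2697_v2 / i9_9900k:.2f}x")
```

    As a sanity check on the "about 2 1/2 times" claim: 129.6 / 57.6 works out to 2.25x.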

    And yes, I have both a dual-Xeon box and an i7 8-core box, and I have run real-time ray-tracing tests from before RTX cards, with the CPUs doing the heavy lifting. The multipliers above are pretty real as regards time in ray tracing or rendering.

    If you're using a lot of nodes, think: the more cores and RAM the better.

    As regards GPUs, I have a couple of older AMDs sitting in boxes on shelves, as they just don't measure up to the GTXs and RTXs. For the money you are spending, an RTX 2080 or better is the way to go. Do a little research and skip the gamer tests; look at the multi-core render factor.

    I do understand your Radeon-versus-Nvidia choice due to the hackintosh scenario.

    Hope this helps.

  • DPStewart
    replied
    I'm not convinced about this Z390 Motherboard platform...

    It locks you at a max of 64GB of RAM. That worries me a wee bit.
    It seems to be both an upgrade AND a downgrade from the X299 platform. I still can't figure out exactly why it was introduced, except to be cheaper for gaming PCs.

  • dermot shane
    replied
    if you have 8TB of storage fast enough to cover caching needs, and 60TB of reliable / raided storage for cam orig, what's the use for the internal 2TB SSD?

    and i'd see no need for more than 500g for the sys drive, but i would put two of them into RAID0 if dualboot windoze/hack allows that

    the sys drive is a fail point unless raided, the BMD "array" is a fail point unless it's set to 6TB usable RAID5, and are the G-Tech drives RAID5/10? if not, they are also a fail point..
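    The usable-capacity arithmetic behind the RAID5 suggestion can be sketched like this. This is a simplified model (real arrays lose a bit more to filesystem and controller overhead); the drive counts assume the MultiDock's 4x 2TB SSDs mentioned in the thread.

```python
def usable_tb(drives: int, size_tb: float, level: int) -> float:
    """Usable capacity in TB for common RAID levels (simplified)."""
    if level == 0:    # striping: all capacity, but no redundancy
        return drives * size_tb
    if level == 5:    # one drive's worth of capacity goes to parity
        return (drives - 1) * size_tb
    if level == 10:   # mirrored stripes: half the raw capacity
        return drives * size_tb / 2
    raise ValueError("unsupported RAID level")

# The 4x 2TB MultiDock:
print(usable_tb(4, 2, 0))   # 8 TB, but any single drive failure loses everything
print(usable_tb(4, 2, 5))   # 6 TB usable, survives one drive failure
print(usable_tb(4, 2, 10))  # 4 TB usable, survives one failure per mirror pair
```

    The trade-off is exactly the one the post points at: RAID0 buys speed and capacity at the cost of being a single point of failure.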

    i have all machines / drives under a cover, and raised 8" above the floor in case a fire alarm triggers the sprinkler system. that's happened twice in the last 20 years; in one case everything was up and running a few hours later, in the other case it was a client's facility that was flooded by sprinklers, and they lost all their arrays including backups. they were very lucky that we had conformed the film earlier that week and i had all the rushes used in the show on my system downtown already.... this was on a $16 million feature with hard deadlines to be in theatres

    as you've noticed - i take care to protect my clients' deadlines from a failure at my end....

    power supply @ 750w seems marginal to me; i have 1200w in my machines. that's all about stability really, one place where more truly is better
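    A rough power-budget check makes the headroom argument concrete. The per-part draw figures below are my own illustrative assumptions (not measurements from this build), erring on the high side of published TDPs:

```python
# Assumed peak draw per component, in watts (illustrative figures only):
draws_w = {
    "RTX 2080 Ti (peak)": 280,     # board power can spike above the 250W TDP
    "i9-9900K (all-core load)": 160,  # well above its 95W TDP when loaded
    "motherboard / RAM / fans": 75,
    "drives and AIO pumps": 50,
}

total = sum(draws_w.values())  # estimated peak system draw
for psu_w in (750, 1000, 1200):
    headroom = 1 - total / psu_w
    print(f"{psu_w}W PSU -> {headroom:.0%} headroom at estimated peak")
```

    Under these assumptions a 750W unit still works, but leaves much less margin for transient spikes and PSU aging than 1000W+.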

    i'd avoid the AMD GPU unless you are going to be in Hack most of 24/7
    16GB of VRAM is a waste; 11GB is workable for UHD / DCI 4K deliverables

    no idea about the CPU. i run dual Xeons trouble-free, and it's a rare day when the CPU is a bottleneck, only when rendering DCPs, but i see around 60fps on DCI 2K and 20 on 4K, so not a big deal either
    Last edited by dermot shane; 07-27-2019, 09:50 AM.

  • gskguston
    replied
    dermot shane you are absolutely correct. I should have mentioned that I will be using a Blackmagic MultiDock with 4x 2TB SSDs that I already own. It is configured as RAID 0 and is very, very fast. For storage I will be using 3x G-RAID 20TB units. I am going with these at the moment because I travel a lot to shoot and need something somewhat portable. In the future I really want to get a giant RAID system that lives in my editing suite.

  • dermot shane
    replied
    you will not get too far on a feature with only 3TB of working disk space; i typically see about 6-8TB per feature by the end of the process

  • gskguston
    replied
    Hey Everyone,

    I just wanted to post the build I am considering putting together. There has been a lot of great feedback here, and I thought maybe it could be useful for others. In the end I have to run a machine that can flip between PC and Mac, which explains why I did not opt for a lot of the recommended hardware. I also have to stay within a budget of about $3,400. I have scoured the web for info, mainly about graphics cards, and came to the conclusion that the Radeon VII 16GB HBM2 would be best for a PC-and-Mac build. If anyone here has used it, it would be great to get more real-world feedback. So let me know what you think.

    CPU
    Intel Core i9-9900K Desktop Processor, 8 Cores, up to 5.0 GHz Turbo, Unlocked, LGA1151 300 Series, 95W

    Motherboard
    GIGABYTE Z390 DESIGNARE (Intel LGA1151 / Z390 / ATX / 2x M.2 / Thunderbolt 3 / Onboard AC WiFi / 12+1-Phase Digital VRM)

    GPU
    PowerColor Radeon VII 16GB HBM2 PCI-E DP/HDMI Vega 7nm Video Graphics Card

    SSD 1
    Samsung 970 EVO 2TB - NVMe PCIe M.2 2280 SSD (MZ-V7E2T0BW)

    SSD 2
    Samsung 970 EVO 1TB - NVMe PCIe M.2 2280 SSD (MZ-V7E1T0BW)

    RAM
    HyperX Predator Black 64GB Kit, 3600MHz DDR4 CL17 DIMM XMP (HX436C17PB3K4/64)

    Power Supply
    EVGA SuperNOVA 750 G3, 80 Plus Gold, 750W, Fully Modular, Eco Mode with New HDB Fan

    CPU Cooler
    CORSAIR Hydro Series H55 AIO Liquid CPU Cooler, 120mm Radiator, 120mm Fan


  • Ralph B
    replied
    Originally posted by DPStewart:
    Anyone have any solid testing information (like Puget Systems') that shows the Threadrippers really solidly matching up to the Intels? I'm not interested in saving a few hundred bucks, so price doesn't matter to me when it comes to CPUs.
    Well, last year I built a new machine using a 16-core Threadripper 2950X, and in DaVinci it was approximately 50% slower than my 4-year-old 8-core 5960X running at 4.2 GHz. To say the least, I was shocked. And this was using the same physical video card, a 1080 Ti.
    I thought I might have done something wrong in the build, but when I ran a Cinema 4D render, the Threadripper was exactly twice as fast as the 5960X, which was right in line with what you would expect.

    So, yeah, I'm very wary of AMD chips in DaVinci. But, ever the optimist, I'm hoping the new 16-core Ryzen 3950X will be good. Heck, even the 12-core 3900X might be a winner with DaVinci. But so far I haven't seen any tests.
    Last edited by Ralph B; 07-24-2019, 05:52 PM.

  • DPStewart
    replied
    Last year Puget Systems was showing the Threadrippers being SLOWER than Intel CPUs when running many Resolve tasks. And I have heard plenty of stories of bugs and issues when running Threadrippers with Nvidia boards like the 1080 Ti in a "video"-style system.
    For these reasons I'm still inclined to stick with the Intel i7s and i9s.... for now.
    I'd love to see the new Threadrippers achieve stability in hard video-system testing like Puget Systems does.
    And what's not to like about more PCIe lanes?

    Also there's the matter of motherboard quality and stability. So far I'm seeing the X299 boards doing really well - despite the higher costs. I absolutely do not want to be wrestling with system bugaboos originating from the chipset platform and motherboards.

    Anyone have any solid testing information (like Puget Systems') that shows the Threadrippers really solidly matching up to the Intels? I'm not interested in saving a few hundred bucks, so price doesn't matter to me when it comes to CPUs.

  • Ralph B
    replied
    Hope you'll run some tests when it's convenient.

  • dermot shane
    replied
    right now i have two films up against a deadline for TIFF, plus some commercials, and another feature picture-locks today... need the horsepower until at least mid-August
