01-16-2014, 02:12 PM | #1 | ||
College Benchwarmer
Join Date: Oct 2000
Location: calgary, AB
Computer building help
For work I do a lot of 3D modeling, which is a pretty intense process and can involve millions or billions of calculations. It creates files for a complete model that can be up to 60 GB, with each stage being around 3 GB. A model can take 2 or 3 days to run completely. I've been in contact with the software provider (who also does consulting work) and they put together the specs for a "cruncher" computer. However, to justify upgrading my computer I would need to provide an estimate of how much better the specified machine is than my current setup. We don't have an IT department per se, and I'm not confident in the response I would get from them. I've poked around the web but can't really find any answers. Does anyone here know of a site where I can compare computer builds to each other, or something like that?
01-16-2014, 02:18 PM | #2 | |
Death Herald
Join Date: Nov 2000
Location: Le stelle la notte sono grandi e luminose nel cuore profondo del Texas
Quote:
There are lots of places that do benchmarks of the components. In your case it's probably going to be pretty dependent on the CPU and graphics card. If you Google the model names plus "benchmark", you should get some good data to compare your current setup against the proposed one.
__________________
Thinkin' of a master plan 'Cuz ain't nuthin' but sweat inside my hand So I dig into my pocket, all my money is spent So I dig deeper but still comin' up with lint |
01-16-2014, 02:35 PM | #3 | |
College Benchwarmer
Join Date: Oct 2000
Location: calgary, AB
|
Quote:
I checked out this one: PassMark - CPU Benchmarks - List of Benchmarked CPUs

The new CPU (i7-4930K) checks in at a PassMark score/rank of 13484/10, while the old one (i7-2600) is 8319/89. Is that a big/little/no deal? If I can go from running one model in 5 days to running it in 3 days, that would be a big win.

The other big deal (maybe?) is that I can move from 16 GB of RAM to 64 GB.

Last edited by nilodor : 01-16-2014 at 02:36 PM.
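Rough math on those two scores, purely as a sketch: it assumes run time scales directly with the PassMark number, which real jobs rarely do, so treat the result as a best case.

# Back-of-the-envelope projection from the PassMark scores (best case only).
old_score = 8319      # i7-2600
new_score = 13484     # i7-4930K

speedup = new_score / old_score
print(f"Speedup: {speedup:.2f}x")                     # ~1.62x, i.e. roughly 60% faster

current_days = 5                                      # the 5-day run mentioned above
projected_days = current_days / speedup
print(f"Projected run: {projected_days:.1f} days")    # ~3.1 days if the job is purely CPU-bound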
01-16-2014, 02:39 PM | #4 |
Coordinator
Join Date: Oct 2000
OpenCL And CUDA Are Go: GeForce GTX Titan, Tested In Pro Apps - Can GeForce GTX Titan Handle Professional Workloads?
If you're trying to get them to buy you a Titan or two. |
01-16-2014, 02:41 PM | #5 |
Death Herald
Join Date: Nov 2000
Location: Le stelle la notte sono grandi e luminose nel cuore profondo del Texas
Going by raw speed alone, the new CPU is about 60% faster. When you run the model now, does your memory max out? If so, that might be a huge source of the delay, since accessing RAM is an order of magnitude faster than accessing swap space on a disk. If memory isn't maxing out, then the RAM jump won't make a huge difference.
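A quick way to check that while a model is running, sketched here assuming Python with the third-party psutil package installed (nothing specific to the modeling software):

import psutil  # third-party: pip install psutil

# Snapshot of physical RAM and swap usage while the model is running.
vm = psutil.virtual_memory()
sw = psutil.swap_memory()
print(f"RAM:  {vm.used / 2**30:.1f} / {vm.total / 2**30:.1f} GiB ({vm.percent}%)")
print(f"Swap: {sw.used / 2**30:.1f} / {sw.total / 2**30:.1f} GiB ({sw.percent}%)")
# If swap usage keeps climbing during a run, the job is spilling out of RAM and
# more memory should help; if it stays near zero, extra RAM won't buy much.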
Also, not sure what kind of calculations the model is using, but does the software vendor say anything about improvements using GPU vs. CPU? I know that for my oil and gas customers, they all use GPUs for the hard core number crunching of things like seismic data.
__________________
Thinkin' of a master plan 'Cuz ain't nuthin' but sweat inside my hand So I dig into my pocket, all my money is spent So I dig deeper but still comin' up with lint |
01-16-2014, 02:42 PM | #6 | |
Death Herald
Join Date: Nov 2000
Location: Le stelle la notte sono grandi e luminose nel cuore profondo del Texas
Quote:
I have a Titan. It is awesome.
__________________
Thinkin' of a master plan 'Cuz ain't nuthin' but sweat inside my hand So I dig into my pocket, all my money is spent So I dig deeper but still comin' up with lint |
01-16-2014, 03:00 PM | #7 | |
College Benchwarmer
Join Date: Oct 2000
Location: calgary, AB
Quote:
I have the BMem toolbar. When I run a model it maxes out the CPU bar pretty consistently. I think it chews up 9 or 10 GB of the RAM, but it doesn't max it out. If the CPU is improved, would the computer be able to utilize more of the RAM? I guess I don't really understand what their relationship is when things are getting processed.
01-16-2014, 03:06 PM | #8 |
Death Herald
Join Date: Nov 2000
Location: Le stelle la notte sono grandi e luminose nel cuore profondo del Texas
No, more CPU wouldn't lead to more RAM usage. For the CPU chart, are you able to show what all 4 cores are doing, or does it just report one aggregated number?
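If it only reports one aggregated number, something like this will show each core separately (again a sketch assuming Python with the third-party psutil package, purely as a diagnostic and not part of the modeling software):

import psutil  # third-party: pip install psutil

# Print per-core utilization once a second for ten seconds while the model runs.
for _ in range(10):
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    print("  ".join(f"core{i}: {p:5.1f}%" for i, p in enumerate(per_core)))
# One core pinned near 100% with the rest idle usually means the solver is
# single-threaded, so per-core speed matters more than the core count.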
I'm surprised though that the software vendor hasn't mentioned GPU rendering. Most 3D stuff now is being processed with video card GPUs, and not CPUs.
__________________
Thinkin' of a master plan 'Cuz ain't nuthin' but sweat inside my hand So I dig into my pocket, all my money is spent So I dig deeper but still comin' up with lint |
01-16-2014, 03:16 PM | #9 | |
College Benchwarmer
Join Date: Oct 2000
Location: calgary, AB
Quote:
FWIW, he said that the video card wasn't that important. Basically, a model is made up of a whole bunch of bricks. For example, say you have a box that's split into a grid of little boxes, 15 tall and 10 deep. If you add a load to the surface, it disturbs the boxes at the surface, which in turn influence those around them. All of the equations are solved discretely, and the process continues until the change from the previous calculation step is small. What's displayed on the screen is a stress or displacement contour that updates as it goes, but it's really not a very graphically intense thing. They said that as long as the video card is compliant with the current version of OpenGL, it'll be fine. I don't really know what the difference between a GPU and a CPU is as far as what will be used.

I think the memory monitor is a mash-up of all the cores. It goes to 200% for some reason. Basically, when I'm running a model the bar fills with a pale blue colour and pings between 150 and 200% in solid blue. It doesn't say exactly, but maybe that means it's using between 75 and 100% of the available crunching power?
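For what it's worth, that "keep solving until the change from the previous step is small" loop has the same shape as this toy 2-D relaxation sketch in Python/NumPy. It is purely an illustration of the idea, with a made-up load value and tolerance, and has nothing to do with the actual solver.

import numpy as np

# Toy grid of "bricks": 15 tall by 10 deep, with a load applied along the top.
grid = np.zeros((15, 10))
grid[0, :] = 100.0                     # the disturbance at the surface

tolerance = 1e-4
change = tolerance + 1.0
steps = 0
while change > tolerance:
    new = grid.copy()
    # Each interior brick relaxes toward the average of its four neighbours.
    new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                              grid[1:-1, :-2] + grid[1:-1, 2:])
    change = np.abs(new - grid).max()  # how much this step moved things
    grid = new
    steps += 1

print(f"Converged after {steps} steps; largest per-step change below {tolerance}")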
01-16-2014, 03:24 PM | #10 |
Death Herald
Join Date: Nov 2000
Location: Le stelle la notte sono grandi e luminose nel cuore profondo del Texas
Yeah, the video card usage and GPU computing I was referring to aren't really reflected in what is drawn on the screen. GPUs are the brains of the video card, and most of the GPUs on the market now have several hundred cores, as opposed to the 4 cores on a CPU. GPUs are dedicated number crunchers, which is what's needed when drawing complex graphics, but that number-crunching ability also works well for apps that need to do billions of calculations. One of my customers has a couple of racks full of servers with dual Nvidia cards in each server, and all they do is process raw seismic data all day for oil and gas exploration programs to use. There isn't a single monitor hooked up to them.
If you are seeing a CPU number rise to 200%, it is likely summing up the usage of all of the cores. So two cores at 100% or four cores at 50% utilization.
__________________
Thinkin' of a master plan 'Cuz ain't nuthin' but sweat inside my hand So I dig into my pocket, all my money is spent So I dig deeper but still comin' up with lint |
01-16-2014, 04:25 PM | #11 |
College Benchwarmer
Join Date: Oct 2000
Location: calgary, AB
Cool, thanks for the help.
So basically the way forward: check with the software company about whether the program uses the CPU or the GPU for the calculations, and where the primary bottleneck is. If it's the CPU, I can go to my IT guys and say that I could get roughly a 60% gain in computing speed (based on the benchmark scores earlier) and we can figure out from there whether it's worth it. If it does use the GPU, then look into linking a couple of computers together with decent video cards to carry out the calculations and not worry too much about the CPUs?
01-16-2014, 04:38 PM | #12 |
Death Herald
Join Date: Nov 2000
Location: Le stelle la notte sono grandi e luminose nel cuore profondo del Texas
You wouldn't have to link the computers together. One of them with a good video card (as long as the software supports GPU rendering) would be great. Linking them gets quite a bit more advanced, but if these kinds of compute models are a big part of what you do, it might make sense to look into setting up a render farm (again, if the software supports it).
__________________
Thinkin' of a master plan 'Cuz ain't nuthin' but sweat inside my hand So I dig into my pocket, all my money is spent So I dig deeper but still comin' up with lint |
01-17-2014, 06:43 PM | #13 |
College Benchwarmer
Join Date: Oct 2000
Location: calgary, AB
Just to finish this off, the software in question does not use the GPU for calculations.
01-17-2014, 09:41 PM | #14 |
Head Coach
Join Date: Oct 2005
The HD is probably a factor too. With days of processing there are a lot of reads and writes, so get an SSD.

Let us know your final rig and performance gains.
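If you want a rough read on whether the drive matters before buying anything, a crude sequential-write test like this can help. It's a sketch only: the file name and size are made up, pick a path on the drive in question, and a real comparison would also measure reads and account for the OS cache.

import os
import time

path = "scratch.bin"                  # put this on the drive you want to test
chunk = b"\0" * (8 * 1024 * 1024)     # 8 MiB per write
total_mib = 1024                      # write ~1 GiB in total

start = time.time()
with open(path, "wb") as f:
    for _ in range(total_mib // 8):
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())              # make sure the data actually hits the disk
elapsed = time.time() - start

print(f"Wrote {total_mib} MiB in {elapsed:.1f} s ({total_mib / elapsed:.0f} MiB/s)")
os.remove(path)                       # clean up the scratch file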
02-27-2015, 02:29 PM | #15 |
Grizzled Veteran
Join Date: Nov 2013
I was thinking of getting a new graphics card for an older computer.
Can I put a PCIe 3.0 card into a PCIe 2.0 motherboard slot? Will there be any serious performance degradation?
__________________
"I am God's prophet, and I need an attorney" |