[I couldn’t resist the Ryzen pun, sorry, it just kinda puts itself out there…]

Before anything else, high five for reading the first real post on this page. I’ve chosen this subject since I’m regularly being asked by my CG peers [pun totally not intended] about workstation hardware specs and similar stuff. The current events being what they are, I decided to make a quick intro to CPUs for people who stare at renders. So without further ado, let’s dig in. [Don’t push the buttons.]

With the release of AMD’s new Ryzen platform, based on their Zen microarchitecture, the tech waters are stirring all over the world. Many reviews have been published, with various results, and many conclusions have been made, some of which are dubious, especially when viewed from the perspective of someone who renders for a living. I’ll try to explain why I’m excited to see what Ryzen brings, and why it could be good for all of us.

To understand the reason behind Intel’s dominance in the workstation/rendering market, we need to do some CPU archaeology and dig a bit deeper into the building blocks of (and differences between) Intel’s and AMD’s CPUs.

AMD is cool!

Back in the stone age of computers (aka 1999 AD) we had 32-bit CPUs. Those with long experience in 3D remember those times as ‘the dark ages’ or ‘the days when every poly had to count’. Luckily we don’t live in those times anymore. In 2000, AMD published what is known as the x64 or AMD64 specification. It was an evolutionary approach to the x86 processor architecture, in the sense that it was perfectly backwards compatible, while enabling a CPU to address 16 exabytes of memory. That’s 16000000000000000000 bytes for you who love counting calories. [Way to go about ‘future proof’, eh?]
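If you want to check my zero counting (or just see how big 2^64 really is), here’s a quick back-of-the-napkin calculation in Python. Strictly speaking the famous ‘16 exabytes’ is the binary flavor (16 EiB), which works out to roughly 18.4 of the decimal kind:

```python
# Quick sanity check on the 64-bit address space claim.
addressable = 2 ** 64                      # bytes a 64-bit pointer can reach
print(f"{addressable:,} bytes")            # 18,446,744,073,709,551,616
print(f"{addressable / 2**60:.0f} EiB")    # 16 binary exabytes
print(f"{addressable / 10**18:.1f} EB")    # ~18.4 decimal exabytes
```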

At the same time (more or less), HP and Intel devised the Itanium or IA-64 architecture. Which was kinda like the HD DVD or Betamax of the 64-bit processor world. Well, not really, but the point is – those computers you have all around you, they ain’t Itaniums.

[On the subject of Itanium, I can totally envision Intel’s marketing department going: ‘yeah, dude, it’s like Titanium, but from Intel you know… like Itanium, get it?’]

So, a short time later we got the Athlon 64 CPU, which was totally 1337. Then the Athlon 64 X2 (as in two cores, yay!) and a lot of other iterations up to the Phenom II series. In that time of great wonders and progress, on both sides of the proverbial CPU coin, core counts were steadily going up and raw speed was rising all the time… Happy times. But as Nick Cave would put it – ‘all things move toward their end’. Already with the Phenom II series AMD was starting to lose pace, and their CPUs got a bad name for being power-hungry, among other things. Still, the price/performance ratio was kinda there, as per usual for AMD, so if nothing else, that kept things afloat.

Intel is cool!

It was in late 2011 that AMD’s Bulldozer CPU architecture was launched. And to put it bluntly, while designing the architecture AMD made some bets. And lost. It boils down to this: Intel had Hyper-Threading (which is still around today), an SMT or ‘simultaneous multithreading’ type of design, while AMD decided to implement CMT, or ‘clustered multithreading’.

So what’s the difference between those? In layman’s terms, a ‘core’ used to have one integer unit and one floating point unit inside. With Bulldozer, AMD’s ‘core’ was actually an integer unit and half of a floating point unit, or to be more exact, a two-core module had two integer units and one shared floating point unit. One can assume that the bet AMD made at the design stage was that integer performance would be more important, but in reality it turned out to be the other way around. This is especially applicable to us CG people. Those pesky floating points are always hanging around, messing with our stuff.

The best way to explain the problem is using buckets. We all love our buckets.

So… with a 4-core/8-thread SMT CPU (the Intel kind) you get 8 happy buckets. With an 8-core CMT CPU (the pre-Ryzen AMD kind) you get 8 sad buckets with a split personality, because those sad buckets only work half the time on floating point stuff, since they have to share the floating point units among themselves and end up acting more like 4 mostly apathetic cores.

And that’s the grossly oversimplified gist of it. Even AMD admitted on many occasions that Bulldozer was a failure in many regards. To be honest, it’s not that black and white, but I’m only considering the implications for rendering and similar workloads.
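If you want to see the bucket effect for yourself, here’s a rough little Python sketch (a toy, not a proper benchmark) that hammers the floating point hardware across a growing number of worker processes. On a full SMT chip the total time should keep dropping up to the thread count; on a shared-FPU (CMT) chip the scaling tends to flatten out much earlier:

```python
# A toy scaling test, not a real benchmark: spread a floating-point-heavy
# loop over a growing number of worker processes and watch how the total
# time changes with the worker count.
import time
from multiprocessing import Pool

def fp_work(n):
    """Burn floating point cycles, vaguely like a render bucket would."""
    acc = 0.0
    for i in range(1, n):
        acc += (i * 0.5) ** 0.5 / (i + 0.25)
    return acc

def measure(workers, n=2_000_000, jobs=32):
    """Run `jobs` chunks of fp_work across `workers` processes, return seconds."""
    start = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(fp_work, [n] * jobs)
    return time.perf_counter() - start

if __name__ == "__main__":
    for workers in (1, 2, 4, 8, 16):
        print(f"{workers:2d} workers: {measure(workers):6.2f} s")
```

Real renderers use far more vectorized math than this toy loop, of course, but the shape of the scaling curve is the part that tells the story.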

[Basically Intel was like: ‘All your floaty points are belong to us!’]

Thus, for years now, it has made no sense to render on anything other than Intel, since the performance was simply way better.

Is Ryzen cool?

Enter Ryzen. AMD, it seems, was hell-bent on correcting previous mistakes, and they’ve designed the all-new Zen architecture from scratch. It brings support for all the modern bells and whistles (like USB 3.1, NVMe, etc.), but what matters most for us is that it’s a genuine SMT-based CPU. Which means no more sad buckets! Yay!

Currently, if you’re in the market for a workstation, the usual choices are either an i7 or a (dual) Xeon-based machine. Xeons wipe the floor with i7s in rendering due to the sheer number of cores you can stuff on a single motherboard. But those cores are also usually clocked quite a bit lower than i7s, so in single-threaded tasks (pFlow, anyone?) most i7 CPUs kick the Xeons into a corner. In my experience, if the workstation will be used for both modeling and rendering, a Xeon system is definitely better, but if the rendering is mostly done on a farm (local or remote), an i7 is a very viable option.

Then there’s the price issue. Thanks to Intel’s product list, which is miles long and impossible to summarize, it’s very hard to plainly state which of the two options is more affordable, and for which purposes. That may be the subject of a future article, but suffice it to say server components are in the expensive-to-OMGWTF range, while desktop components are in the affordable-to-just-WTF range. Ultimately, what you get for any given price will depend on the intended use of the machine you’re building.

As if that weren’t enough, there’s the (un)buffered (non-)ECC (conund)RAM.

ECC (or ‘error correcting code’) memory is mostly used in server applications, and it does one thing that matters to us: by catching and correcting single-bit memory errors, it reduces the number of crashes, which is sweet. But at the same time, both it and the components that support it are more expensive, which is lame. So the price/performance equation gets even more complicated.

Here’s where Ryzen comes into this story, at least in theory (one which I will hopefully test and share with you as soon as the universe considers me worthy enough of receiving the parts I ordered). Being an 8-core/16-thread CPU, it stands neck and neck with Intel’s (non-server) products we would consider for workstation purposes, like the i7-6900K for example. Even with Intel’s recent price cuts on some of their products, a 6900K costs $1,021.97 on Amazon US. In comparison, the Ryzen R7 1800X (the strongest Ryzen chip in the current lineup) costs $499.

A top-of-the-line socket AM4 (Ryzen) motherboard costs between $180 and $255, while a top-of-the-line socket 2011-3 (i7-6900K and similar) motherboard costs… well… anywhere from $180 to who knows what. Suffice it to say, an R7 1800X-based system with similar specs will inevitably be substantially cheaper than a 6900K one. And along with the nice price we get better power efficiency (unlike its Bulldozer brethren, Ryzen handles power much better), even faster performance in rendering and similar multithreaded tasks than with a 6900K, and a similar result in single-threaded tasks. For me, at least on paper, that looks like a sweet deal. I won’t go into gamer logic and start comparing it with the likes of Intel’s newest 7700K Kaby Lake CPUs, since for the work real men have to do, the 7700K has far too few buckets to offer.
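Just to make that concrete, here’s a trivial back-of-the-envelope comparison using only the prices quoted above (CPU plus a top-end board on each side; RAM, GPU and the rest would be roughly the same either way, and the 2011-3 board price is the optimistic low end):

```python
# Platform cost comparison using the prices mentioned in the text.
# CPU + top-end motherboard only; RAM, GPU, storage, etc. cancel out.
ryzen_platform = 499.00 + 255.00     # R7 1800X + priciest AM4 board mentioned
intel_platform = 1021.97 + 180.00    # i7-6900K + cheapest 2011-3 board mentioned

print(f"R7 1800X platform: ${ryzen_platform:,.2f}")
print(f"i7-6900K platform: ${intel_platform:,.2f}")
print(f"Difference:        ${intel_platform - ryzen_platform:,.2f}")
```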

And that’s not all, because in answer to a question on Reddit, AMD officials said the following:

“ECC works as long as the motherboard supports it, but it is not part of the official validation testsuite.”

We’ll see what motherboard manufacturers will do with this, but it’s great news for the workstation market segment. Again, in theory, you get good single-threaded performance, stable memory, awesome multithreaded performance, and all that for a low price. If that’s not the definition of a disruptive product, I don’t know what is.

But…

Considering the fact that the Ryzen architecture is brand new, there are definitely things that have to be tweaked (it’s an immensely complicated endeavor, after all). Whether that will happen soon remains to be seen. While AMD’s marketing team did an outstanding job of creating hype around the Ryzen release, I’m afraid it ended up being a double-edged sword at the very least. From current accounts around teh interwebz, the… BIOSs (BIOS-es? BII?) are not really all that stable, there are issues with RAM speeds, and a lot of other stuff.

The product was obviously rushed to market, and the motherboard manufacturers didn’t have enough time to prepare. Given that the world was desperate for some actual competition in the CPU market (Intel’s CPUs have progressed technologically in very small increments for years now, unlike their pricing models), everyone was standing on tiptoes waiting to see if AMD finally had a competitive product again, and that eagerness to see results brought on a substantial amount of bad press and mixed feelings.

I can’t really call pushing a product to market like this, without being certain it can work to the best of its abilities, anything other than a total noob move, but it is what it is, and we will see in the upcoming months if Ryzen can hold its own. Regardless, I’m personally very excited, since we haven’t even seen the server side of things yet. AMD has been talking about a CPU, code-named Naples, boasting 32 cores and 64 threads. If Naples follows the aggressive pricing of the current R7 line, we could very well have render farms sprouting like mushrooms. [And we’ll have moar buckets. We’ll have all the best buckets.]

I would advise anyone looking into using Ryzen as a CPU for actual work to disregard most of the reviews available online for now, since the logic they use to reach certain conclusions is… funky… at best. An example of that would be a famous gaming channel which stated: “Yes, it’s definitely better in multi threaded applications, but in production everything gets unloaded to the GPU anyway so it’s not really relevant.” Does it now? I’d like a one-to-one chat with that guy for a few minutes. Just to, you know… explain some stuff in a friendly manner. In any case – until we see it rendering, pFlowing, encoding and doing the stuff we do every day, I’m not jumping to conclusions. [I won’t even go into how much I care if a 1080p game has 130 instead of 150 FPS.]

[And not to be a dick, but if you have a 1080p monitor in this industry, there’s something wrong with you and it ain’t your CPU, lol.]