How powerful will the GTX 1180 be?

What’s happening

Nvidia’s GTX 1180 or 2080 graphics card (or whatever Nvidia decides to call it) has been long awaited; it has been over 830 days since the release of the GTX 1080. Let’s see whether the wait will be worth it by estimating how powerful the new flagship card might be.


Approach

First, bring in the data, which covers all the xx80 cards from the GTX 280 to the current GTX 1080. The most important metrics here are floating point performance, synthetic benchmarks, game benchmarks, and transistor count. These tend to be positively correlated with the overall performance of a graphics card, whereas something like memory speed or bus width depends heavily on the particular architecture.

# semicolon-separated spec sheet covering the GTX 280 through the GTX 1080
xx80 <- read.csv("xx80.csv", sep = ";")


Transistor Density

Next, rescale the data to get transistor density in transistors per square inch.

# 0.00155 converts die area from mm^2 to in^2
scaled_transistor_density <- (xx80$transistors_millions*1000000)/(xx80$die_size*0.00155)

Moore’s Law predicts that the number of transistors per square inch doubles every 18 months. This may be helpful in predicting future Nvidia transistor density, so let’s create another vector using Moore’s Law: take the GTX 280’s transistor density as the starting point and double it every 18 months.

# Moore's Law projection: GTX 280 density doubled every 18 months
moore_predicted_scaled_transistory_density <- rep(scaled_transistor_density[1], 9)*c(1,2,4,8,16,32,64,128,256)
moore_dates <- c("2009-01-08", "2010-07-08", "2012-01-08", "2013-07-08", "2015-01-08", "2016-07-08", "2018-01-08", "2019-07-08", "2021-01-08")

Let’s see where Nvidia’s transistor density is compared to Moore’s Law.
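
Here is a minimal sketch of such a comparison plot in base R, assuming the xx80$launch column holds each card’s release date (as it does in the launch-gap calculations later on):

launch_dates <- as.Date(as.character(xx80$launch))
# Moore's Law projection as a line, actual xx80 cards as points
plot(as.Date(moore_dates), moore_predicted_scaled_transistory_density,
     type = "b", col = "steelblue",
     xlim = range(c(launch_dates, as.Date(moore_dates))),
     xlab = "Release date", ylab = "Transistors per square inch")
points(launch_dates, scaled_transistor_density, pch = 19, col = "forestgreen")
legend("topleft", legend = c("Moore's Law projection", "Nvidia xx80 cards"),
       col = c("steelblue", "forestgreen"), pch = c(1, 19))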

It looks like Nvidia’s progression in transistor density has not kept up with Moore’s Law. This makes sense, as it has become increasingly difficult with each generation to sustain that pace. Let’s create a linear model regressing past GTX cards’ transistor densities against time, with time on the x-axis and transistor density on the y-axis. While each card’s specific release date is certainly important, it’s better not to treat the dates as factor levels; instead, we take the predictor variable, time, as continuous. This helps the model generalize, especially since the exact release date of each card is more or less arbitrary and under Nvidia’s control. We get the following regression equation:

\[y = 4497000x - 6290000000\]

Our R-squared, or coefficient of determination, is 0.9165, meaning the model accounts for 91.65% of the total variation in transistor density. In other words, the model fits the data quite well.
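
The lm call below assumes a data frame named nvidia_transistors holding each card’s release date and scaled transistor density; a minimal sketch of how it might be assembled from the vectors above:

nvidia_transistors <- data.frame(
  date = as.Date(as.character(xx80$launch)),
  transistors = scaled_transistor_density
)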

nvidia_transistor_model <- lm(transistors ~ date, data = nvidia_transistors)
summary(nvidia_transistor_model)$r.squared
    ## Multiple R-squared:  0.9165

However, this model can be improved. While Nvidia’s transistor density improvements don’t grow at the exponential rate Moore’s Law suggests, the growth is certainly faster than linear. Let’s use the powerTransform function from the car package to see how we can transform our response variable to improve the fit of the model.

library(car)  # powerTransform() comes from the car package
powerTransform(nvidia_transistors[,2])
    ## Estimated transformation parameter
    ##               0.3664345

The estimated Box-Cox parameter is about 0.366, which is close to 0.5, so in order to avoid over-fitting our data, let’s transform the response using a square root instead of the exact value of 0.366.

Our transformed model is given by the following regression equation:

\[\sqrt{y} = 27.94x - 35200\]

nvidia_transistor_model_2 <- lm(sqrt(transistors) ~ date, data = nvidia_transistors)
summary(nvidia_transistor_model_2)$r.squared
    ## Multiple R-squared:  0.9314

Now we have an R-squared value of 0.9314, an improvement over our previous model. To put this transformation into perspective, we can compare the following regression equations:

Moore’s Law model: $$\log(y) = mx + b$$

Initial linear model: $$y = mx + b$$

Transformed linear model: $$\sqrt{y} = mx + b$$

So our transformed model puts transistor density growth below Moore’s Law but above the plain linear model.

Now we use both models to predict transistor density for an estimated GTX 1180 release date of August 1, 2018, then divide each prediction by the GTX 1080’s transistor density to get the expected percentage increase. Plotting the predicted densities on our graph, it looks like we can expect anywhere between a 15.6% and a 37.4% boost in transistor density with the new GTX 1180.

predict(nvidia_transistor_model, newdata = data.frame("date" = as.Date("2018-08-01")), type = "response")/nvidia_transistors[7,2] - 1
    ## 0.1559908
predict(nvidia_transistor_model_2, newdata = data.frame("date" = as.Date("2018-08-01")), type = "response")^2/nvidia_transistors[7,2] - 1
    ## 0.3742506


Synthetic Benchmarks

Synthetic benchmarks are generally helpful in gauging performance. While their accuracy can sometimes be affected by driver issues and software problems, we are going to take them at face value here.

First, add the 3DMark score (a long-standing synthetic benchmark) to the data. Now let’s take a look at how 3DMark scores and floating point performance have changed over time.

# 3DMark scores for each card (NA where no score is available)
xx80$threedmark_score <- c(NA, 3649, 4952, 7672, 10490, 13898, 21787)

Looking at floating point performance, we can create another linear model. Floating point performance tends to follow transistor density extremely closely, so we can apply the same square-root transformation we used on the transistor density model.
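
The tflop_data frame used below pairs each card’s launch date with its single-precision floating point performance in GFLOPS; a minimal sketch of how it might be built (the GFLOPS column name here is a placeholder, not necessarily the real one in the data set):

tflop_data <- data.frame(
  date = as.Date(as.character(xx80$launch)),
  score = xx80$processing_power_single_gflops  # hypothetical column name
)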

tflop_model <- lm(score ~ date, tflop_data)
summary(tflop_model)$r.squared
    ## 0.9251189
powerTransform(tflop_data[,2])
    ## Estimated transformation parameter
    ##       0.0868053
tflop_model2 <- lm(sqrt(score) ~ date, tflop_data)
summary(tflop_model2)$r.squared
    ## 0.9815361

Use the predict function on both models, just as we did with transistor density. It looks like we can expect anywhere between 10,097.98 and 12,328.65 GFLOPS from the new GTX 1180, which translates to a 13.8% or a 38.95% increase, respectively, over the current GTX 1080.

predict(tflop_model, newdata = data.frame("date" = as.Date("2018-08-01")), type = "response")
    ## 10097.98
predict(tflop_model2, newdata = data.frame("date" = as.Date("2018-08-01")), type = "response")^2
    ## 12328.65

Now let’s look at the increase in 3DMark score and floating point performance between generations; the values below give each card’s single-precision performance as a multiple of the previous generation’s. It’s interesting to see that big performance boosts seem to happen every other generation.

xx80$processing_power_single_percentage
    ## NA 1.898374 1.175574 1.954608 1.286779 1.252546 1.781369
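
Such a column could be derived along these lines (again using the hypothetical GFLOPS column from the tflop_data sketch above):

# each card's single-precision GFLOPS as a multiple of the previous generation's
c(NA, xx80$processing_power_single_gflops[2:7] / xx80$processing_power_single_gflops[1:6])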


Investigating Time Between Launches

The GTX 480, 680, and 1080 were big performance improvements; the GTX 580, 780, and 980 were smaller ones. It has been over 830 days since the release of the GTX 1080, nearly 1.9 times the average time between releases. This unusually long gap, the cyclical nature of the performance increases, and the previous shortage due to cryptocurrency mining all point towards a big performance boost with this next generation. With this in mind, I’d place my money on the higher predictions from our transformed linear models over the plain ones.

as.Date(as.character(xx80$launch[2:7])) - as.Date(as.character(xx80$launch[1:6]))
    ## Time differences in days
    ## 442 228 499 366 544 556
mean(as.Date(as.character(xx80$launch[2:7])) - as.Date(as.character(xx80$launch[1:6])))
    ## Time difference of 439.1667 days
as.Date(as.character("2018-07-05")) - as.Date(as.character(xx80$launch[7]))
    ## Time difference of 830 days
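
For reference, the 830-day gap works out to almost 1.9 times the historical average computed above:

gap <- as.Date("2018-07-05") - as.Date(as.character(xx80$launch[7]))
avg_gap <- mean(as.Date(as.character(xx80$launch[2:7])) - as.Date(as.character(xx80$launch[1:6])))
as.numeric(gap) / as.numeric(avg_gap)
    ## 1.889943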


Game Benchmarks

Thankfully, TechSpot has aggregated 1080p and 1600p benchmarks available on their site. They are known for accurate benchmarking, so these numbers should be reliable.

xx80$average_game_perf_1080p <- c(NA, 21, 24, 40, 56, 77, 127)
xx80$average_game_perf_1600p <- c(NA, 30, 35, 53, 71, 109, 161)
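
# The models below need a data frame pairing each card's GFLOPS figure with its
# average game benchmarks; a sketch of how it might look (the GFLOPS column
# name is again the hypothetical one used for tflop_data):
game_synthetic_scores <- data.frame(
  tflop = xx80$processing_power_single_gflops,
  game_perf_1080p = xx80$average_game_perf_1080p,
  game_perf_1600p = xx80$average_game_perf_1600p
)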
game_synthetic_1080p_model <- lm(game_perf_1080p ~ tflop, data = game_synthetic_scores)
summary(game_synthetic_1080p_model)$r.squared
    ## 0.9932335
game_synthetic_1600p_model <- lm(game_perf_1600p ~ tflop, data = game_synthetic_scores)
summary(game_synthetic_1600p_model)$r.squared
    ## 0.9753941

We use our predicted floating point performance figure (12,328.65 GFLOPS) from the transformed linear model above to predict gaming performance in average frames per second. We should see roughly a 40% increase in gaming performance.

predict(game_synthetic_1080p_model, 
    newdata = data.frame("tflop" = 12328.65), type = "response")
    ## 177.4223
predict(game_synthetic_1600p_model, 
    newdata = data.frame("tflop" = 12328.65), type = "response")
    ## 226.2672


Final Predictions

Our models estimate that the GTX 1180 will come in at around 20.3 billion transistors per square inch and 12.3 TFLOPs of floating point performance. In addition to raw compute performance, gaming performance should also see sizeable gains over the GTX 1080.

37% increase in transistor density
39% increase in TFLOPs
40% increase in gaming performance
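
As a quick recap, the headline figures can be pulled back out of the transformed models fitted earlier:

predicted_density <- predict(nvidia_transistor_model_2, newdata = data.frame("date" = as.Date("2018-08-01")))^2
predicted_gflops <- predict(tflop_model2, newdata = data.frame("date" = as.Date("2018-08-01")))^2
round(predicted_density / 1e9, 1)  # roughly 20.3 billion transistors per square inch
round(predicted_gflops / 1000, 1)  # roughly 12.3 TFLOPs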


Sources

TechSpot, Nvidia, ExtremeTech