# How powerful will the GTX 1180 be?

## What’s happening

Nvidia’s GTX 1180 or 2080 graphics card (or whatever Nvidia decides to call it) has been long awaited: it has been over 830 days since the release of the GTX 1080. Let’s see whether the wait is worth it by estimating how powerful the new flagship card might be.

## Approach

First, bring in the data, covering all the cards from the GTX 280 to the current GTX 1080. The most useful metrics here are floating point performance, synthetic benchmarks, game benchmarks, and transistor count. These tend to be positively correlated with a card’s overall performance, whereas something like memory speed or bus width depends heavily on the particulars of the current architecture.

## Transistor Density

Rescale the data for transistor density in terms of transistors per square inch.

Moore’s Law predicts that the number of transistors per square inch doubles every 18 months. This may be helpful in predicting future Nvidia transistor density, so let’s create another vector using Moore’s Law: take the GTX 280’s transistor density as the starting point and double it every 18 months.
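As a rough sketch of that vector in Python (the launch dates below are the approximate real ones, but the baseline density is an illustrative placeholder; the real value comes from the dataset):

```python
from datetime import date

# Assumed baseline: GTX 280 launch (mid-2008) and an illustrative transistor
# density in transistors per square inch -- a placeholder, not the real figure.
BASE_DATE = date(2008, 6, 16)
BASE_DENSITY = 1.6e9

def moores_law_density(on_date):
    """Double the baseline density every 18 months (~548 days)."""
    periods = (on_date - BASE_DATE).days / (18 * 30.44)  # 30.44 = avg month length
    return BASE_DENSITY * 2 ** periods

# Project the Moore's Law density at each flagship launch (approximate dates)
launches = [date(2008, 6, 16), date(2010, 3, 26), date(2012, 3, 22),
            date(2014, 9, 18), date(2016, 5, 27)]
projected = [moores_law_density(d) for d in launches]
```

Plotting `projected` against the actual densities gives the comparison described below.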

Let’s see where Nvidia’s transistor density is compared to Moore’s Law.

It looks like Nvidia’s progression in transistor density has not kept up with Moore’s Law. This makes sense, as it has become increasingly difficult with each generation to sustain that pace. Let’s create a linear model regressing past GTX graphics cards’ transistor densities against time, with time on the x-axis and transistor density on the y-axis. While the specific release date of each graphics card is definitely important, it’s better to avoid using release dates as factors; instead, we treat the predictor variable, time, as continuous. This helps the model’s accuracy considerably, especially since, to us, the release date of each graphics card is more or less arbitrary and under Nvidia’s control. We get the following regression equation:

\[y = 4497000x - 6290000000\]Our R-squared, or coefficient of determination, is **0.9165**. This means our model accounts for **91.65%** of the total variation in transistor density; in other words, the model fits the data very well.
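A minimal sketch of this fit in Python; the (year, density) pairs below are hypothetical stand-ins for the real GTX 280–1080 data, so the coefficients will differ from the post’s:

```python
import numpy as np

# Hypothetical (release year, transistors per sq. inch) stand-ins for the
# GTX 280 .. GTX 1080 data; time is a continuous predictor, not a factor.
years = np.array([2008.5, 2010.2, 2012.2, 2014.7, 2016.4])
density = np.array([1.6e9, 3.0e9, 4.5e9, 9.0e9, 1.48e10])

# Ordinary least squares fit: density = slope * year + intercept
slope, intercept = np.polyfit(years, density, 1)

# R^2: the share of variance in density the fitted line explains
fitted = slope * years + intercept
ss_res = ((density - fitted) ** 2).sum()
ss_tot = ((density - density.mean()) ** 2).sum()
r_squared = 1 - ss_res / ss_tot
```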

However, this model can be improved. While Nvidia’s transistor density improvements don’t happen at the exponential rate Moore’s Law suggests, the growth is certainly faster than linear. Let’s use the power transform function to see how we can transform our response variable to improve the fit of the model.

To avoid over-fitting our data, let’s use a **square-root** transformation instead of the exact estimated power of **0.366**.

Our transformed model is given by the following regression equation:

\[\sqrt{y} = 27.94x - 35200\]Now we have an R-squared value of **0.9314**, an improvement over our previous model. To put this transformation into perspective, square both sides: \(y = (27.94x - 35200)^2\), so the transformed model grows quadratically in time.

So our new transformed model situates transistor density growth slower than Moore’s Law’s exponential, but faster than the linear model.
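The transformed fit can be sketched the same way, again with hypothetical stand-in data: regress the square root of density on time, then square the prediction to get back to the original scale.

```python
import numpy as np

# Same hypothetical stand-ins as the linear-model sketch
years = np.array([2008.5, 2010.2, 2012.2, 2014.7, 2016.4])
density = np.array([1.6e9, 3.0e9, 4.5e9, 9.0e9, 1.48e10])

# Fit on the transformed scale: sqrt(density) = slope * year + intercept
slope, intercept = np.polyfit(years, np.sqrt(density), 1)

def predict_density(year):
    """Back-transform: square the linear prediction for sqrt(density)."""
    return (slope * year + intercept) ** 2

# R^2 on the transformed scale, the analogue of the 0.9314 quoted above
sqrt_d = np.sqrt(density)
fitted = slope * years + intercept
r_squared = 1 - ((sqrt_d - fitted) ** 2).sum() / ((sqrt_d - sqrt_d.mean()) ** 2).sum()
```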

Now we use both models to predict transistor densities for an estimated GTX 1180 release date of August 1, 2018, and divide each prediction by the GTX 1080’s transistor density to calculate the GTX 1180’s percentage increase in transistor density. Plotting the new predicted densities on our graph, it looks like we can expect a boost of anywhere between **15.6%** and **37.4%** in transistor density with the new GTX 1180.
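The percentage arithmetic behind those figures is straightforward. Here is a sketch: the GTX 1080 baseline density is approximated from its 7.2 billion transistors on a roughly 314 mm² die, and the two predictions are illustrative values consistent with the boosts quoted above, not outputs of the real models.

```python
# Approximate GTX 1080 density in transistors per square inch,
# from ~7.2B transistors on a ~314 mm^2 die (1 sq in = 645.16 mm^2)
gtx_1080_density = 7.2e9 / (314 / 645.16)

# Illustrative model predictions for August 1, 2018
linear_pred = 17.1e9       # plain linear model
transformed_pred = 20.3e9  # square-root-transformed model

def pct_increase(new, old):
    """Percentage increase of new over old."""
    return 100 * (new / old - 1)

low = pct_increase(linear_pred, gtx_1080_density)        # ~15.6%
high = pct_increase(transformed_pred, gtx_1080_density)  # ~37.2%
```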

## Synthetic Benchmarks

Synthetic benchmarks are generally helpful in gauging performance. While their accuracy can sometimes be susceptible to driver issues and software problems, we are going to take them at face value here.

First, add the 3DMark score (a longstanding synthetic benchmark) alongside floating point performance. Now let’s take a look at how these two performance metrics trend over time.

Looking at the floating point performance, we can create a linear model. Floating point performance tends to follow transistor density extremely closely, so we can apply the same transformation as we did in the transistor density model.

Use the predict function on both models, similar to what we did with transistor density. It looks like we can expect anywhere between **10,097.98** and **12,328.65** GFLOPS with the new GTX 1180, which translates to increases of **13.8%** and **38.95%**, respectively, over the current GTX 1080.
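As a sanity check on those percentages, the GTX 1080 baseline of roughly 8,873 GFLOPS (2560 cores × 1733 MHz boost × 2 ops per cycle, and also what the quoted percentages imply) recovers them directly:

```python
# GTX 1080 peak single-precision throughput in GFLOPS
gtx_1080_gflops = 8873  # ~2560 cores x 1733 MHz x 2 ops/cycle

# The two model predictions quoted above, in GFLOPS
predictions = {"linear": 10097.98, "transformed": 12328.65}

increases = {name: 100 * (p / gtx_1080_gflops - 1)
             for name, p in predictions.items()}
# increases["linear"] ~ 13.8, increases["transformed"] ~ 38.9
```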

Now let’s look at the percentage increase in 3DMark score and floating point performance between generations. It’s interesting to see that big performance boosts seem to happen every other generation.

## Investigating Time Between Launches

The GTX 480, 680, and 1080 were *big* performance improvements; the GTX 580, 780, and 980 were *small* ones. It’s been over **830** days since the release of the GTX 1080, which is **1.88** times the average time between releases. The sheer amount of time, the cyclical nature of the performance increases, and the previous shortage due to cryptocurrency mining all point towards a *big* performance boost with this next generation. With this in mind, I’d place my money on the higher predictions from our transformed linear models over the regular ones.
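The day count is simple date arithmetic; the "as of" date below is an assumption chosen so that the 830-day figure lines up with the GTX 1080's May 27, 2016 launch:

```python
from datetime import date

gtx_1080_launch = date(2016, 5, 27)
as_of = date(2018, 9, 4)  # assumed writing date

days_since = (as_of - gtx_1080_launch).days  # 830 days
avg_gap = days_since / 1.88                  # implied average gap between releases
```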

## Game Benchmarks

Thankfully, TechSpot has aggregated 1080p and 1600p benchmarks available on its site. TechSpot is known for accurate benchmarks, so this should not be a problem.

Using the predicted floating point performance (**12,328.6** GFLOPS) from our transformed linear model, we can make a prediction for gaming performance in average frames per second: we should see a **40%** increase in gaming performance.
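A sketch of that last step: regress average FPS on GFLOPS and evaluate at the predicted 12,328.6 GFLOPS. The (GFLOPS, FPS) pairs below are hypothetical stand-ins for the TechSpot aggregates, so the resulting uplift will not match the 40% figure exactly.

```python
import numpy as np

# Hypothetical (GFLOPS, average 1080p FPS) pairs standing in for the
# TechSpot aggregate benchmark data across past flagships
gflops = np.array([1345, 2488, 3213, 5046, 8873])
avg_fps = np.array([30, 48, 58, 82, 126])

# Linear fit: average FPS as a function of GFLOPS
slope, intercept = np.polyfit(gflops, avg_fps, 1)

gtx_1180_fps = slope * 12328.6 + intercept
uplift = 100 * (gtx_1180_fps / (slope * 8873 + intercept) - 1)
```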

## Final Predictions

Our models estimate the GTX 1180 will come in at **20.3 billion** transistors per square inch and **12.3 TFLOPS** in floating point performance. In addition to raw compute performance, gaming performance should also see sizeable gains over the GTX 1080.

*37% increase in transistor density*

*39% increase in TFLOPS*

*40% increase in gaming performance*