Google has just released a new version of the AdWords API. There’s nothing spectacular – unless you’re a Quality Score geek.

The API version v201601 comes with three new fields for the KEYWORDS_PERFORMANCE_REPORT:

- SearchPredictedCtr
- CreativeQualityScore
- PostClickQualityScore

You’ve seen these three before: they correspond to the familiar Quality Score factors *expected CTR*, *ad relevance*, and *landing page experience*.

What gets me excited is the chance to get these three factors into a report, allowing for further analysis on a massive scale. So I downloaded a report with a lot of keywords and analyzed the results in Excel.

**Results**

I looked at the combinations of the three factors and the Quality Scores they lead to. The complete results are shown on the right. A plus stands for above average, a zero for average, and a minus for below average.

The first thing I found was that each combination of the three factors always leads to the same Quality Score. For example, if *expected CTR* and *ad relevance* are above average and *landing page experience* is just average (+ + 0 in the table), that’s always an 8. There were no exceptions.

What’s more interesting is that *ad relevance* is less important than the other two factors. For example, a 10 requires all three factors above average. Expected CTR or landing page experience going down to average leads to an 8, whereas losing ad relevance still gets you a 9.

More important is the difference from an optimization perspective: Improving *ad relevance* by one step (from below average to average, or from average to above average) always improves Quality Score by one point. Improving *expected CTR* or *landing page experience*, on the other hand, will often improve Quality Score by two points. These two factors appear to be equally important.

In any case: Improving one of the factors will always improve Quality Score.

**Weights**

Treating the table with its 27 rows as a system of 27 equations, it wasn’t too hard to find weights for each of the three factors.

Each step in *ad relevance* (from below average to average or from average to above average) changes Quality Score by exactly one point. For *expected CTR* and *landing page experience* every step changes Quality Score by 1.75 points.

Using these weights of course leads to some decimal numbers. For example, with all factors average, you get a Quality Score of 5.5 (right in the middle between 1 and 10). But rounding these numbers always leads to the correct Quality Scores (see table below).
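Putting these weights into a formula, here’s a minimal sketch (the function and the 0/1/2 encoding of each factor are my own; the weights, baseline, and rounding come from the analysis above):

```
// Quality Score from the three visible factors, per the weights above.
// Each factor: 0 = below average, 1 = average, 2 = above average.
function qualityScore(expectedCtr, adRelevance, landingPage) {
  var raw = 1                                   // baseline: all factors below average
          + 1.00 * adRelevance                  // ad relevance: one point per step
          + 1.75 * (expectedCtr + landingPage); // the two heavier factors
  return Math.round(raw);
}
```

For the examples from above: all three factors above average gives a 10; + + 0 gives a raw 8.25, rounded to 8; dropping only *ad relevance* to average gives an exact 9.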

**Interpretations**

There are a few ways to interpret these numbers. Apparently, *expected CTR* and *landing page experience* are 75% more important than *ad relevance*. We can also give in to temptation, translate these weights into percentages and make a nice pie chart:
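The percentages behind such a pie chart follow directly from the weights; here’s the arithmetic (variable names are mine):

```
// Translate the weights (1 for ad relevance, 1.75 each for the other two)
// into shares of the total weight.
var total = 1 + 1.75 + 1.75;          // 4.5
var adRelevanceShare = 1 / total;     // ~22%
var expectedCtrShare = 1.75 / total;  // ~39%
var landingPageShare = 1.75 / total;  // ~39%
```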

**Consequences**

Using different weights for the factors has some interesting implications: It gets us more scores than just the integers from 1 to 10 (as shown in the table on the right). Instead, we can get 15 different scores. At the same time, not all integers are valid: There’s a 3.75 and a 4.5, but you can’t use the weights to get to a 4.0.
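A quick way to check this is to enumerate all 27 combinations (a sketch using the weights above; variable names are mine):

```
// Enumerate all 27 factor combinations and collect the distinct raw scores.
var scores = [];
for (var ectr = 0; ectr <= 2; ectr++) {
  for (var adRel = 0; adRel <= 2; adRel++) {
    for (var lpe = 0; lpe <= 2; lpe++) {
      var raw = 1 + adRel + 1.75 * (ectr + lpe);
      if (scores.indexOf(raw) === -1) scores.push(raw);
    }
  }
}
// scores now holds 15 distinct values, including 3.75 and 4.5 but not 4.0.
```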

Using the numbers from the report we could calculate more accurate Quality Scores. This could be helpful for tools that report on Quality Score, but there’s little benefit in the added decimals for the day-to-day account management.

For optimization purposes this can also help since it tells you to work on CTR and landing pages rather than ad relevance. However, in practice, CTR and ad relevance are closely related and there’s no reason to improve one without the other.

**The Disclaimer: Auction Quality Score**

While I enjoy diving into keyword Quality Score, I can’t emphasize this point enough: Keyword Quality Score is not the real thing. Keyword Quality Score is a number between 1 and 10, and it’s evidently made up of three factors with three possible values each. What Google uses in the auction is much more complex.

I’ve long been saying that auction Quality Score is Google’s best estimate of click-through probability (a.k.a. expected CTR). I don’t think landing pages are considered in the auction at all, but if they are, they’re pretty unimportant (certainly nowhere near 39%).

I don’t want to go into the details again, but be advised that keyword Quality Score has a few shortcomings by design. A big one is its limitation to a maximum of 10, which gives people the impression that there’s nothing to be gained by further optimization. With the three factors the limit is *above average*, which sets the bar even lower.

I believe this can lead to the wrong conclusion: Just because some visible values can’t be improved any further doesn’t mean you should stop optimizing.

If expected CTR is above average – great. But that doesn’t mean that you’re done.

**Update 1: Look at your own data**

Google has added support for this kind of report in AdWords Scripts. With the following script you can create such a report and export it to a spreadsheet. Just run it and look for the spreadsheet’s URL in the logs.

```
function main() {
  // Create a spreadsheet to hold the report.
  var spreadsheet = SpreadsheetApp.create("Report");

  // Pull Quality Score and its three components for all enabled keywords.
  var report = AdWordsApp.report(
    "SELECT CampaignName, AdGroupName, Criteria, KeywordMatchType, " +
    "QualityScore, SearchPredictedCtr, CreativeQualityScore, PostClickQualityScore " +
    "FROM KEYWORDS_PERFORMANCE_REPORT " +
    "WHERE Id NOT_IN [3000000, 3000006] " +
    "AND Status = 'ENABLED' " +
    "AND AdGroupStatus = 'ENABLED' " +
    "AND CampaignStatus = 'ENABLED'");

  // Export the report and log the spreadsheet's URL.
  report.exportToSheet(spreadsheet.getActiveSheet());
  Logger.log("Report available at " + spreadsheet.getUrl());
}
```

**Update 2: SEL Post**

A very similar analysis has been posted by Brad Geddes on SearchEngineLand. I’m still waiting to hear back as to how this happened.

Martin Roettgerding is the head of SEM at SEO/SEM agency Bloofusion Germany. On Twitter he goes by the name @bloomarty, and you can find him regularly on #ppcchat.