Click Fraud Survey & Discussion

Wednesday, December 14, 2005

Part 2: Google Sends a Non-Boilerplate Response, HOLY COW!

This is the third (and in my opinion, the most telling) of three very interesting articles about Google's AdWords written by Robert X. Cringely, one of Silicon Valley's brightest minds over the past 20 years. See below for the two other parts.


My friend finally finished his AdWords experiment after 36 days. He was trying to do a Taguchi optimization of his AdWords campaign, something that has so far eluded Taguchi experts including the PhD (my original contact) who was hired to help him. The Taguchi Methods of Robust Design are techniques for testing hundreds or even thousands of variables with only a handful of actual experiments. Taguchi treats the process as a black box and looks only at inputs and outputs. In many ways it is a kind of codified reverse-engineering system, but one that has been very well proven since its original invention in 1945.
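To make the "thousands of variables, handful of experiments" idea concrete, here is a small sketch (mine, not the experimenters'; the factor names and response numbers are made up) using the smallest standard Taguchi orthogonal array, L4, which estimates the main effects of three two-level factors from only four runs instead of the full 2^3 = 8:

```python
# Taguchi-style L4(2^3) orthogonal array: each row is one experiment,
# each column a factor level (0 or 1). Every pair of columns contains
# each combination (0,0),(0,1),(1,0),(1,1) exactly once -- that balance
# is what makes the array "orthogonal" and the effects separable.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def main_effects(responses):
    """For each factor, return the average response at level 1
    minus the average response at level 0."""
    effects = []
    for factor in range(3):
        hi = [r for row, r in zip(L4, responses) if row[factor] == 1]
        lo = [r for row, r in zip(L4, responses) if row[factor] == 0]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# Hypothetical click-through rates (%) measured in the four runs,
# e.g. for factors like headline wording, bid level, landing page:
responses = [2.0, 2.6, 3.0, 3.2]
print(main_effects(responses))  # estimated effect of each factor
```

With a stable black box, repeating this and confirming the predicted best combination is all it takes; the experimenters' complaint was that the AdWords box would not hold still long enough for the estimates to converge.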

Alas, Taguchi doesn't work well with AdWords for some reason. Just when things are starting to make sense, they stop doing so. That's the way it was for my friend. Things were going well until suddenly they changed for a five-day period ending with the publication of my first AdWords column. The AdWords black box could be optimized for a while, but then it couldn't be. And then it could be again.

This was annoying for the Taguchi experts who were quite used to optimizing black boxes with thousands of internal variables that are never identified or seen. What was going on here? What was introducing what Taguchi calls "noise factors" that kept the optimization from being achieved?

The Taguchi experts concluded that the noise factor was probably some form of Bayes-Nash equilibrium experiment being conducted by AdWords itself. The Taguchi expectation of rational behavior was being confounded, they believed, by deliberate manipulation intended to move the Nash equilibrium point and improve mid- to long-term profit for Google. This is possible because, unlike a traditional Nash equilibrium experiment where all bids are known, the AdWords algorithm is unknown, though (incorrectly) presumed to be rational. In other words, AdWords was deliberately giving up income in the short term (behavior that looks irrational) to coax advertisers to bid higher for words, leading to greater revenue and profit for Google in the long term.
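For readers unfamiliar with the auction mechanics behind that argument: AdWords at the time ran a generalized second-price (GSP) style auction, in which each winning advertiser pays roughly the bid of the advertiser ranked below it (the real system also weighted bids by click-through rate, which this sketch omits). The toy code below, with hypothetical bids, shows why Google's revenue is set by the whole field's bids rather than just the winners': coax every advertiser into bidding higher and revenue rises even though nothing about the ranking changes.

```python
def gsp_allocate(bids, slots):
    """Toy generalized second-price auction: rank bidders by bid;
    each winner pays the bid of the advertiser ranked just below.

    bids: dict of advertiser -> bid. Returns [(advertiser, price), ...].
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for i in range(min(slots, len(ranked))):
        advertiser, _ = ranked[i]
        # Price comes from the next-ranked bid; the last winner,
        # with nobody below, pays a floor of 0 in this toy model.
        price = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        results.append((advertiser, price))
    return results

bids = {"A": 1.50, "B": 1.20, "C": 0.90, "D": 0.50}
print(gsp_allocate(bids, slots=2))

# Same advertisers, same ranking, every bid nudged up $0.30:
# the winners pay more, so per-auction revenue is strictly higher.
raised = {k: v + 0.30 for k, v in bids.items()}
print(gsp_allocate(raised, slots=2))
```

This is only the static picture; the experimenters' claim was about the dynamic one, i.e., that the auctioneer could behave "irrationally" for a while to shift where advertisers' bids settle.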

Those are the conclusions of the experimenters, not me. So I ran their conclusions by Jeff Huber at Google, who is in charge of engineering for AdWords. I asked a simple question: Is that what you were doing? "In short, no," said Jeff. "We are not intentionally introducing 'noise factors' or any other perturbations in the style the proposed theory suggests to affect near-term or long-term revenues."

I know it's not quite as exciting, but our model is pretty simple. We want our users to have the most relevant possible content, so we work really hard on an ongoing basis to optimize end-user perceived quality. We want our advertisers to have a great return on the investment they spend with us. If we do both of these well, we'll continue to do fine financially -- more users will come to Google because we provide the most relevant content and the best experience, and more advertisers will come because we provide qualified and cost-effective introductions for their business and they make a lot of money. We may well give up revenue in the near term to improve end-user perceived quality; it may not appear "rational" to an advertiser (or a given modeling method, or a Wall Street analyst, for that matter!) to optimize on relevance rather than near-term revenue, but we think it's the right long-term thing to do for users, advertisers, and us.

I'm not an expert on Taguchi methods, but I have seen the approach very successfully applied to systems that have relatively linear behavior between inputs and outputs (for example, many websites now use the approach to optimize their home page and landing pages based on different test layouts; obviously it has a rich history in manufacturing).

As mentioned in my prior note, the AdWords system is highly dynamic and incorporates user behavior, optimization on end-user relevance (which is continually evolving and improving), competitor behavior, historical performance of advertising campaigns, and non-linear effects based on position, campaign configurations (e.g., rate limiting to manage to advertiser-defined budgets), and policy enforcement (e.g., editorial review on ad creative changes, algorithms to encourage diversity and minimize redundant ads). I think I understand the experiment's spirit and intent, but the implementation could also have run into our double-serving policy where multiple ads are attempting to target the same terms and serving the same or highly similar (re-named) content.

Whether and how AdWords could be modeled with Taguchi methods, or other approaches, might be an interesting PhD research thesis topic. Without knowing the accounts involved and how the experiment was conducted, it's unfortunately pretty hard to conclude which factors may have been at play in the five days that your friend had trouble modeling. I'll reiterate our offer to help investigate if your friend would like; all we'd need is the account(s) used in the experiment.

Who's right? Who's wrong? Does any of this really matter? And is the concept of "evil," which Google claims to avoid, even remotely involved? Beats me.

It isn't really clear what's going on here, though the experimental side was organized and professional enough that I tend to believe they were observing SOMETHING, whatever was causing it. And even if Google was trying to optimize AdWords in that way, there is nothing illegal about it. If they don't do it, someone else will.

Notice that in Jeff's explanation, above, what the AdWords system is tuned to maximize is "end-user perceived quality" -- not the maximum return on investment for the advertisers' AdWords budgets. The key word is PERCEIVED.

Finally, several months ago Google (and Overture, too) were quite specifically advertising positions for experts in Bernoulli-Nash equilibrium optimization. These were jobs not for programmers, but for operations research types. One of the people recruited has been a friend of mine for many years. From what he told me, Google and Overture were looking for EXACTLY the kind of technical capability described above.

Is this a big deal? Not for most people. Not even for most AdWords users, who are probably making plenty of money from their campaigns; otherwise -- as we now know from Nash -- they wouldn't be doing it at all. But for those AdWords users like my friend the experimenter, it probably means that truly optimizing an AdWords campaign is going to be a much harder job. I think it can be done, but it won't be easy.
