Dan Heller's Photography Business Blog: Industry analysis from www.danheller.com

The photography world -- the business, the culture, the art, the politics, the technology.


Friday, October 16, 2009

Might Picscout Ultimately Cause Yahoo to Acquire Getty?

I realize the title of this post is rather provocative. But let me lead you through this.

It all starts with David Sanger's blog post on Picscout's new Image Registry and ImageExchange, the system Picscout uses to index images and bring buyers and sellers together through third-party licensors. David makes insightful comments on three critical points.

First, his point #2:
Picscout aims to take a percent of sales, noting on their site: “ImageExchange acts as an online affiliate program, sharing image-licensing income between PicScout and licensors.” This will reduce the percent that goes to the photographer.


David is not the first to observe this, but it illustrates how the big picture is being missed. The premise begins with the fact that the universe of image users (some of whom are active buyers, but most of whom are not) use applications that produce documents (digital and print). Those applications are developed by third-party Independent Software Vendors (ISVs), such as Adobe or Microsoft. If the applications that ISVs produce adopt the Picscout API to hook into the registry and identify the images a user has placed in a document, those users will not only be automatically notified that they are using copyrighted images, but will also be given the opportunity to license them. This concept isn't far-fetched--exactly the same thing is done when users try to view movies or listen to songs on some devices.

However, because such a thing is not yet done for images, it has the potential to transform the stock licensing industry. If enough ISVs adopt the API and hook into the registry, a critical mass of users will invariably be recruited into the photo-licensing economy. The more ISVs that adopt this API, the more applications will be using it, which casts a wider and wider net of users... who themselves become image buyers.
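To make the idea concrete, here's a rough sketch of what an ISV-side integration might look like. It is purely illustrative: the endpoint, the request fields, and the response fields are my own inventions for the sake of the example, not Picscout's published API.

    import requests  # any HTTP client would do; requests is assumed here

    REGISTRY_URL = "https://registry.example.com/lookup"  # hypothetical endpoint

    def check_document_images(image_fingerprints, affiliate_id):
        """For each image placed in the user's document, ask the registry
        whether it's a known copyrighted image and, if so, collect an offer
        the application can present to the user."""
        offers = []
        for fp in image_fingerprints:
            resp = requests.post(REGISTRY_URL, json={
                "fingerprint": fp,          # compact hash computed locally
                "affiliate": affiliate_id,  # the ISV's rev-share identifier
            })
            match = resp.json()
            if match.get("registered") and not match.get("licensed"):
                offers.append({
                    "image_id": match["image_id"],
                    "price": match["price"],
                    "license_url": match["license_url"],
                })
        return offers

The specifics don't matter; what matters is that the check happens inside the authoring tool, at the moment of use, with the ISV's affiliate ID attached so the ISV gets its cut.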

Here's the hitch: those ISVs will not adopt the API unless they have a stake in the game. That is, a cut of the license revenue. Unless someone has another carrot to wave in front of those ISVs, that's the only way to get them to participate in the program. If ISVs don't adopt the API, this whole discussion is moot. No one uses the registry. Game Over.

Therefore, the game is to capture the ISVs. And the only financial incentive they can possibly have is to participate in the licensing model--that is, a rev-share. This has the added advantage of giving the ISVs even more incentive to get their own users to license images: the more their users license, the more money the ISVs make. The ISVs will not just promote these features, they may make it pretty darn difficult for users to avoid them.

Imagine what Adobe would do if it could get a cut of a $10B economy just by adding a feature to InDesign that assured the photos being used in any given document were properly licensed... much the same way an iPod assures that the movie it's about to play has been purchased.

This is the same model I've described in my article, The Economics of Migrating from Web 2.0 to Web 3.0: convert the vast majority of image users into image buyers, and sales volumes go way up.

So, David's observation that the photographer's percentage of the royalty goes down is true, but it clearly misses the big picture. Obviously, ISV rev-sharing cuts the pie into smaller slices, but each is a smaller slice of a much larger pie.
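A quick back-of-the-envelope illustration, with numbers made up purely to show the shape of the argument (not a forecast of actual rates or volumes):

    # Hypothetical figures chosen only to illustrate "a smaller slice of a
    # much larger pie" -- not actual royalty rates or license volumes.

    price = 50.0  # average license fee, in dollars

    # Today: the photographer keeps a larger share, but few licenses happen.
    today_share, today_licenses = 0.40, 1000
    today_income = price * today_share * today_licenses        # 20,000

    # With ISV rev-share: the photographer's slice shrinks, but the
    # applications funnel far more users into licensing.
    future_share, future_licenses = 0.25, 20000
    future_income = price * future_share * future_licenses     # 250,000

Whatever the real numbers turn out to be, the argument stands or falls on volume: if the ISVs convert enough of their users into buyers, a thinner royalty percentage still yields more income.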

David then makes another keen observation in point #7 about Picscout's underlying technology:
Evaluating an entire page of thumbnails is time-consuming. Each thumbnail must be downloaded and analyzed by the PicScout servers before returning index comparison results...


Though David only cites the Google search as an example of how users expect "speed," this is only the tip of the iceberg. Picscout's web-browser plug-in that examines Google searches is merely a prototype to demonstrate how the API works. Once again, the real goal is to capture ISVs.

But David's observation is more prescient than he may have thought, because to ISVs, performance is probably even more important than rev-sharing. If their apps degrade in performance by using the Picscout API, they won't use it, irrespective of rev-share.

The technology Picscout has introduced is clearly a first-stage prototype, meant to introduce the business model and put the company first on the map. Yet it's also Picscout's Achilles' heel, because a race is about to ensue.

Let's not be naive: Picscout is not the only company on this track. Image recognition is a science akin to text search: there are many ways to do it--some better than others--but it only needs to perform to a minimal threshold for the business model to succeed. Many other factors dictate success or failure. Though Picscout may have superior image-recognition algorithms, that part isn't the crown jewels. Indeed, there are many companies with image-recognition algorithms, Google being one of them.

The real challenge is to build a network protocol that can communicate image information between a client and a server as quickly as possible, using as little network bandwidth as possible. Then, this mechanism needs to scale up to service huge volumes of requests from huge numbers of applications on the net. Picscout may be the first to introduce the proof-of-concept and a prototype, but the real race is on the back-end... as David pointed out.
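To illustrate why the wire format matters, here's a sketch of a generic "difference hash" fingerprint--one common approach, and not necessarily anything like Picscout's--that reduces an image to a few bytes on the client, so the server only ever sees small fingerprints rather than whole thumbnails. It assumes the PIL/Pillow imaging library is available.

    from PIL import Image  # assumes the PIL/Pillow imaging library

    def dhash(path, size=8):
        # Shrink to grayscale at a tiny fixed size so the fingerprint survives
        # rescaling, recompression, and minor color shifts.
        img = Image.open(path).convert("L").resize((size + 1, size))
        pixels = list(img.getdata())
        bits = 0
        for row in range(size):
            for col in range(size):
                left = pixels[row * (size + 1) + col]
                right = pixels[row * (size + 1) + col + 1]
                bits = (bits << 1) | (1 if left > right else 0)
        return bits  # a 64-bit integer: 8 bytes over the wire, not a thumbnail

    def hamming(a, b):
        # The server compares fingerprints by counting the bits that differ.
        return bin(a ^ b).count("1")

Whether it's this or something far more sophisticated, the principle is the same: do the heavy lifting once on the client, send a tiny token, and let the back end answer enormous numbers of cheap lookups. That back end is where the real engineering (and capital) goes.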

On the surface, this would seem difficult -- and it is -- but it's hardly new. All large-scale social-network sites do this on a regular basis, from Twitter to Facebook to Flickr. Though cloud computing is mature, the real barrier to entry here is the costly capital investment necessary to run such a service. There are many players in the field that already have this infrastructure. By comparison, Picscout would have a harder time ramping up to that level of computing resources than a larger company would have acquiring some sort of image-recognition technology (if it doesn't already have one).

For now, the game is Picscout's to lose, since they're first. But "first" players often find themselves playing catch-up soon thereafter. If Picscout even moderately demonstrates the concept's viability, much larger players (such as photo-sharing sites) that already have such resources will be quick to swoop in.

Lastly, David notes in his point #3:
If buyers find it easier to find images through web search they will move away from distributor sites for search, and only use the distributor site for the final licensing.


Yes. Exactly. But that's nothing new. It's been that way since about 2002, a fact I've been pounding on since that time: the vast majority of image licenses are done on a peer-to-peer basis, directly between buyers and photographers. Stock agencies have suffered because they've missed this point, and have since struggled to figure out how to fight their way out of the paper bag.

But that struggle will end without their having to do much about it. With the combination of image recognition and web crawling, the emerging business model Picscout is attempting is now a fait accompli. That is, David is correct to say that the stock agencies of today will become nothing more than hosting sites and clearinghouses that supply inventory to other middleman sites (like Picscout) that do the real job of pairing buyers and sellers.

But is this really a bad thing? He says it in a way that suggests that agencies somehow preserve stock prices. Let's not forget that if ISVs and others realize there's money to be made, they won't want to underprice inventory either. If you want to preserve price stability, convert the social networks from photo-sharing into photo-licensing businesses.

I've nothing against agencies, but their future will require them to do two things they never did before--in fact, that they avoided: rank well in search engines (so that end-users are more likely to find content in the first place), and attract as much content as possible. That is, stop being editors. Let any and all images in, and let the natural ranking abilities of search engines and social networks be the real editors. To date, stock agencies have neither sufficient content volume nor web ranking in search results, and they don't employ social-network aspects on their sites to attract users in high volumes. (Again, their heads have been in the sand for too long.)

So the question is, who can do this? Answer: Photo-sharing social networks.

Back in 2008, I posted an article titled Stock Photography, the Consumer, and the Future that forecasts this very phenomenon. Once it's realized that there's lots of money to be made by creating a streamlined and automated image-licensing mechanism, the sleeping giants of the photo-sharing social networks will awaken and bulldoze over the traditional stock agencies in ways that no one would have believed.

Indeed, I wrote in January 2008, in an article titled Pulling the Flickr sword out of the Yahoo stone:
Flickr is one of the very few photo-asset powerhouses on the web that could monetize its content in ways that would exceed even modest expectations.
In fact, I also wrote, in an article titled The Solution to Getty's Woes, that Getty should acquire Flickr for this very reason.

But times have changed considerably since then -- Getty has shrunk in size, and Yahoo! has recovered handsomely. Getty could never acquire Flickr now... but if this whole business model of using image-recognition as a vehicle for licensing images shows promise, then I wouldn't be surprised if Yahoo! starts casting devious stares towards Getty.

Hmmmm......
