Googlebot

Reputation: 15683

How do major websites capture thumbnails from a link?

When you share a link on a major website like Digg or Facebook, it creates a thumbnail by capturing the main images of the page. How do they extract images from a webpage? Does it involve loading the whole page (e.g. with cURL) and parsing it (e.g. with preg_match)? To me, this method seems slow and unreliable. Do they have a more practical method?

P.S. I think there should be a practical way to crawl the page quickly, skipping some parts (e.g. CSS and JS) to get straight to the src attributes. Any ideas?
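
Something along these lines is what I have in mind (a rough sketch, all names made up): request only the HTML document, so CSS and JS files are never downloaded, and scan it for img src attributes, much as cURL plus preg_match would:

    import re
    import urllib.request

    def candidate_images(url, max_bytes=100000):
        # Read at most the first ~100 KB; the interesting tags usually sit near the top.
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read(max_bytes).decode("utf-8", errors="ignore")
        # Pull the src attribute out of every <img> tag with a regex.
        return re.findall(r'<img[^>]+src=["\']([^"\']+)["\']', html, re.IGNORECASE)

    print(candidate_images("http://example.com/"))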

Upvotes: 4

Views: 1224

Answers (4)

crizCraig

Reputation: 8897

JohnD's answer shows that Reddit uses embed.ly as part of their Python solution. Really, embed.ly does the hard part of finding the image, and it's free for under 10,000 requests/mo.
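
For example, the lookup is a single HTTP call. A sketch, assuming the oEmbed endpoint and field names from their docs (YOUR_API_KEY is a placeholder):

    import json
    import urllib.parse
    import urllib.request

    def embedly_thumbnail(page_url, api_key):
        query = urllib.parse.urlencode({"key": api_key, "url": page_url})
        with urllib.request.urlopen("http://api.embed.ly/1/oembed?" + query, timeout=10) as resp:
            data = json.loads(resp.read().decode("utf-8"))
        # embed.ly returns a thumbnail_url for most pages it can handle.
        return data.get("thumbnail_url")

    print(embedly_thumbnail("http://example.com/some-article", "YOUR_API_KEY"))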

Upvotes: 2

JohnD

Reputation: 4002

They typically look for an image on the page and scale it down on their servers. Reddit's scraper code shows a good deal of what they do. The Scraper class should give you some good ideas on how to tackle this.
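
Not their actual code, but the same idea as a sketch: collect the page's image URLs, keep the largest one, and scale it down (Pillow is assumed for the resizing):

    import io
    import urllib.request
    from PIL import Image  # Pillow

    def best_thumbnail(image_urls, size=(70, 70)):
        best, best_area = None, 0
        for src in image_urls:
            try:
                raw = urllib.request.urlopen(src, timeout=10).read()
                img = Image.open(io.BytesIO(raw))
            except Exception:
                continue  # skip broken links and non-image responses
            area = img.size[0] * img.size[1]
            if area > best_area:
                best, best_area = img, area
        if best is not None:
            best.thumbnail(size)  # scales in place, keeping the aspect ratio
        return best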

Upvotes: 2

ceejayoz

Reputation: 180065

They generally use a tool like webkit2png.
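
For example, from a script (assuming webkit2png is installed and on the PATH; it renders the page with WebKit and writes PNG files to the current directory):

    import subprocess

    # Render the page and let webkit2png write its PNG output.
    subprocess.check_call(["webkit2png", "http://example.com/"])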

Upvotes: 0

Gerben

Reputation: 16825

Some use

    <link rel="image_src" href="yourimage.jpg" />

included in the head of the page. See http://www.labnol.org/internet/design/set-thumbnail-images-for-web-pages/6482/

Facebook uses

    <meta property="og:image" content="thumbnail_image" />

See: http://developers.facebook.com/docs/share/#basic-tags
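
A crawler can check for those two hints before falling back to scanning every <img> tag. A rough sketch (the regexes assume the attribute order shown above; real pages vary):

    import re
    import urllib.request

    def thumbnail_hint(url):
        html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", errors="ignore")
        patterns = (
            r'<meta[^>]+property=["\']og:image["\'][^>]+content=["\']([^"\']+)["\']',
            r'<link[^>]+rel=["\']image_src["\'][^>]+href=["\']([^"\']+)["\']',
        )
        for pattern in patterns:
            match = re.search(pattern, html, re.IGNORECASE)
            if match:
                return match.group(1)
        return None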

Upvotes: -1
