Reputation: 190
I feed my website with info from a table on another website. I used to get the needed info with:
$html = file_get_contents('http://www.example.ex');
and then work with it through regular expressions.
Unfortunately, the other website has changed, and its source code no longer contains an HTML table.
However, if I inspect the element that holds the info (in the Chrome browser), it still shows up as a table, and I can copy that element's "Outer-HTML" and paste it into my files.
Is there a more professional way to capture that info (the outer HTML of an element, or of the whole page) than copy and paste? Thanks to everyone.
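For reference, the part that doesn't need copy-paste — pulling an element's outer HTML out of an HTML string — can be done with PHP's built-in DOM extension instead of regular expressions. This is a minimal sketch: the inline `$html` stands in for whatever `file_get_contents()` would return, and the XPath expression is a placeholder you would adapt to the real page.

```php
<?php
// Minimal sketch: extract an element's outer HTML with PHP's built-in
// DOM extension instead of regular expressions. The inline $html stands
// in for the string file_get_contents('http://www.example.ex') returns.
$html = '<html><body><p>intro</p><table id="data">'
      . '<tr><th>Name</th><th>Price</th></tr>'
      . '<tr><td>Foo</td><td>10</td></tr>'
      . '</table></body></html>';

$doc = new DOMDocument();
libxml_use_internal_errors(true);  // tolerate real-world, sloppy HTML
$doc->loadHTML($html);
libxml_clear_errors();

$xpath = new DOMXPath($doc);
$table = $xpath->query('//table[1]')->item(0);

// saveHTML($node) serializes the node itself, i.e. its outer HTML.
$outerHtml = $doc->saveHTML($table);

// Once you have a DOM, the individual cell values are also easy to reach:
$cells = [];
foreach ($xpath->query('.//td|.//th', $table) as $cell) {
    $cells[] = trim($cell->textContent);
}
```

Note this only works if the table is actually present in the fetched source; if the page now builds the table with JavaScript, the raw HTML from `file_get_contents()` will not contain it, no matter how you parse it.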
Upvotes: 0
Views: 283
Reputation: 130
Maybe this post is useful to you: Stackoverflow Post
If that doesn't work: someone over there suggests a PHP web-scraping framework called Goutte, which could be more useful to you if the website changes again.
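A rough sketch of what that would look like with Goutte (installed via `composer require fabpot/goutte`) — the URL and the CSS selector are placeholders for the real site, and `outerHtml()` assumes a reasonably recent version of the underlying Symfony DomCrawler component:

```php
<?php
// Hedged sketch: fetch a page with Goutte and select the table with a
// CSS selector instead of a regular expression. URL/selector are
// placeholders; requires: composer require fabpot/goutte
require __DIR__ . '/vendor/autoload.php';

use Goutte\Client;

$client  = new Client();
$crawler = $client->request('GET', 'http://www.example.ex');

// Outer HTML of the first table on the page:
$tableHtml = $crawler->filter('table')->first()->outerHtml();

// Or walk the rows directly:
$crawler->filter('table tr')->each(function ($row) {
    echo trim($row->text()), "\n";
});
```

Two caveats: Goutte has since been deprecated in favor of using Symfony's `HttpBrowser` directly, and neither executes JavaScript — if the table is rendered client-side, you would need the site's underlying data endpoint or a headless-browser tool such as Symfony Panther instead.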
Upvotes: 1