Khaled Mohamed

Reputation: 217

Simple web crawler in C#

I have created a simple web crawler, but I want to add recursion so that for every page that is opened I can collect the URLs on that page. I have no idea how to do that, and I would also like to use threads to make it faster. Here is my code:

namespace Crawler
{
    public partial class Form1 : Form
    {
        String Rstring;

        public Form1()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
            
            WebRequest myWebRequest;
            WebResponse myWebResponse;
            String URL = textBox1.Text;

            myWebRequest =  WebRequest.Create(URL);
            myWebResponse = myWebRequest.GetResponse();//returns a response from an Internet resource

            Stream streamResponse = myWebResponse.GetResponseStream();//returns the data stream from the Internet

            StreamReader sreader = new StreamReader(streamResponse);//reads the data stream
            Rstring = sreader.ReadToEnd();//reads it to the end
            String Links = GetContent(Rstring);//gets the links only
            
            textBox2.Text = Rstring;
            textBox3.Text = Links;
            streamResponse.Close();
            sreader.Close();
            myWebResponse.Close();
        }

        private String GetContent(String Rstring)
        {
            String sString="";
            HTMLDocument d = new HTMLDocument();
            IHTMLDocument2 doc = (IHTMLDocument2)d;
            doc.write(Rstring);
            
            IHTMLElementCollection L = doc.links;
           
            foreach (IHTMLElement links in  L)
            {
                sString += links.getAttribute("href", 0);
                sString += "\n";
            }
            return sString;
        }
    }
}

Upvotes: 13

Views: 72564

Answers (4)

Darius Kucinskas

Reputation: 10661

I fixed your GetContent method as follows to get new links from the crawled page:

public ISet<string> GetNewLinks(string content)
{
    Regex regexLink = new Regex("(?<=<a\\s*?href=(?:'|\"))[^'\"]*?(?=(?:'|\"))");

    ISet<string> newLinks = new HashSet<string>();
    foreach (var match in regexLink.Matches(content))
    {
        // HashSet<T>.Add is a no-op for duplicates, so no Contains check is needed
        newLinks.Add(match.ToString());
    }

    return newLinks;
}

Updated

Fixed: regex should be regexLink. Thanks @shashlearner for pointing out my typo.
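To get the recursion the question asks for, GetNewLinks can be combined with a visited set. Here is a minimal sketch, assuming GetNewLinks above is in scope; the names CrawlRecursive, maxDepth, and the WebClient-based download are illustrative choices, not part of the original code:

```csharp
// Sketch only: a recursive crawl built on top of GetNewLinks.
private readonly ISet<string> visited = new HashSet<string>();

public void CrawlRecursive(string url, int depth, int maxDepth)
{
    // stop at the depth limit; HashSet<T>.Add returns false for URLs already seen
    if (depth > maxDepth || !visited.Add(url))
        return;

    string content;
    try
    {
        using (var client = new System.Net.WebClient())
            content = client.DownloadString(url);
    }
    catch (System.Net.WebException)
    {
        return; // skip pages that fail to download
    }

    foreach (string link in GetNewLinks(content))
        CrawlRecursive(link, depth + 1, maxDepth);
}
```

Note that GetNewLinks returns href values exactly as written in the page, so relative links would need to be resolved against the page's base URI (for example with `new Uri(baseUri, link)`) before being fetched.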

Upvotes: 12

Misterhex

Reputation: 939

I have created something similar using Reactive Extensions.

https://github.com/Misterhex/WebCrawler

I hope it can help you.

Crawler crawler = new Crawler();

var observable = crawler.Crawl(new Uri("http://www.codinghorror.com/"));

observable.Subscribe(onNext: Console.WriteLine,
    onCompleted: () => Console.WriteLine("Crawling completed"));

Upvotes: 8

Connor

Reputation: 21

The following is a recommendation rather than a complete answer.

I believe you should use a DataGridView instead of a TextBox, since it makes the found links (URLs) easier to see in the GUI.

You could change:

textBox3.Text = Links;

to

dataGridView.DataSource = Links;

Now for the question: you haven't included your using directives, and I can't work out which ones were used. It would be appreciated if you could add them.
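For what it's worth, judging from the types used in the question's code, the directives were probably these (a best guess, inferred rather than confirmed by the asker):

```csharp
using System;               // String, EventArgs
using System.IO;            // Stream, StreamReader
using System.Net;           // WebRequest, WebResponse
using System.Windows.Forms; // Form, TextBox
using mshtml;               // HTMLDocument, IHTMLDocument2 (COM reference:
                            // "Microsoft HTML Object Library")
```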

Upvotes: 2

Tom

Reputation: 1

From a design standpoint, I've written a few web crawlers. Basically you want to implement a depth-first search using a Stack data structure. You can also use breadth-first search, but the frontier can grow large and consume a lot of memory. Good luck.
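The explicit-stack depth-first search described above might look like the sketch below; DownloadPage and GetNewLinks here are hypothetical stand-ins for whatever fetch and link-extraction methods you use:

```csharp
// Illustrative sketch: depth-first crawl with an explicit Stack<string>,
// which avoids the call-stack limits of plain recursion.
var toVisit = new Stack<string>(); // swap in Queue<string> for breadth-first
var visited = new HashSet<string>();

toVisit.Push(startUrl);
while (toVisit.Count > 0)
{
    string url = toVisit.Pop();
    if (!visited.Add(url))
        continue; // already crawled

    string content = DownloadPage(url); // hypothetical fetch helper
    foreach (string link in GetNewLinks(content))
        toVisit.Push(link);
}
```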

Upvotes: 0
