Doron Muzar

Reputation: 443

I'm trying to extract all links from a website, but only some of the links are extracted. Why?

In the new class I have this method:

public List<string> test(string mainUrl, int levels)
{
    List<string> csFiles = new List<string>();
    wc = new System.Net.WebClient();
    HtmlWeb hw = new HtmlWeb();
    List<string> webSites;
    csFiles.Add("temp string to know that something is happening in level = " + levels.ToString());
    csFiles.Add("current site name in this level is : " + mainUrl);
    try
    {
        // Download and parse the page via the TimeOut helper class.
        HtmlAgilityPack.HtmlDocument doc = TimeOut.getHtmlDocumentWebClient(mainUrl, false, "", 0, "", "");
        currentCrawlingSite.Add(mainUrl);
        webSites = getLinks(doc);
        // ... (rest of the method omitted)

In that method, the doc variable comes from the TimeOut class, where I download the URL:

class MyClient : WebClient
{
    public bool HeadOnly { get; set; }

    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest req = base.GetWebRequest(address);
        if (HeadOnly && req.Method == "GET")
        {
            req.Method = "HEAD";
        }
        return req;
    }
}

public static HtmlAgilityPack.HtmlDocument getHtmlDocumentWebClient(string url, bool useProxy, string proxyIp, int proxyPort, string usename, string password)
{
    HtmlAgilityPack.HtmlDocument doc = null;
    try
    {
        using (MyClient clients = new MyClient())
        {
            // Full GET (HeadOnly = false); with HeadOnly = true the body would come back 0-length.
            clients.HeadOnly = false;
            byte[] body = clients.DownloadData(url);
            string type = clients.ResponseHeaders["content-type"];

            // Skip missing or non-HTML content types (binary, images, etc.).
            if (type == null || !type.StartsWith(@"text/html"))
            {
                return null;
            }

            doc = new HtmlAgilityPack.HtmlDocument();
            WebClient client = new WebClient();
            //client.Headers.Add("user-agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705;)");
            client.Credentials = CredentialCache.DefaultCredentials;
            client.Proxy = WebRequest.DefaultWebProxy;
            if (useProxy && !string.IsNullOrEmpty(proxyIp))
            {
                // Optional proxy with credentials.
                WebProxy p = new WebProxy(proxyIp, proxyPort);
                if (!string.IsNullOrEmpty(usename))
                {
                    if (password == null)
                        password = string.Empty;
                    p.Credentials = new NetworkCredential(usename, password);
                }
                // The proxy was created but never assigned to the client before.
                client.Proxy = p;
            }
            doc.Load(client.OpenRead(url));
        }
    }
    catch (Exception)
    {
        // Swallowing the exception hides download/parse failures; doc stays null.
    }
    return doc;
}

private static string GetUrl(string url)
{
    // Extracts the URL between the "Url: " and " ---" markers.
    string startTag = "Url: ";
    string endTag = " ---";
    int startTagWidth = startTag.Length;
    int index = url.IndexOf(startTag, 0);
    int start = index + startTagWidth;
    index = url.IndexOf(endTag, start + 1);
    return url.Substring(start, index - start);
}
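
As a side note, the MyClient override above only changes behavior when HeadOnly is true, which rewrites the GET into a HEAD request. A minimal sketch of that usage, assuming the same MyClient class (the URL here is just a placeholder):

// Sketch: probe the content type with a HEAD request before committing to a full download.
// Assumes the MyClient class shown above; the URL is only a placeholder.
using (MyClient probe = new MyClient())
{
    probe.HeadOnly = true;                      // GetWebRequest rewrites GET to HEAD
    probe.DownloadData("http://example.com/");  // body comes back empty for a HEAD request
    string contentType = probe.ResponseHeaders["content-type"];
    if (contentType != null && contentType.StartsWith("text/html"))
    {
        probe.HeadOnly = false;                 // switch back to a real GET
        string html = probe.DownloadString("http://example.com/");
        // html can then be loaded with HtmlAgilityPack's LoadHtml
    }
}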

Then in the first class I have this method:

private List<string> getLinks(HtmlAgilityPack.HtmlDocument document)
{
    List<string> mainLinks = new List<string>();
    var linkNodes = document.DocumentNode.SelectNodes("//a[@href]");
    if (linkNodes != null)
    {
        foreach (HtmlNode link in linkNodes)
        {
            var href = link.Attributes["href"].Value;
            // Filter: keep only links that look absolute.
            if (href.StartsWith("http://") || href.StartsWith("https://") || href.StartsWith("www"))
            {
                mainLinks.Add(href);
            }
        }
    }
    return mainLinks;
}

So, for example, let's say the main URL is:

https://github.com/jasonwupilly/Obsidian/tree/master/Obsidian

There I can see more than 10 links. But when I put a breakpoint after the line webSites = getLinks(doc); I see only 7 links inside. webSites is a List<string>.

Why do I see only 7 links when there are more than 10 links on the main URL, all of which start with http, https, or www?

I think something in the getLinks method is not right. For some reason it's not getting all the links.
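
As a rough way to see what is actually being filtered (a sketch, assuming doc is the HtmlDocument already loaded in test), every href the selector returns can be dumped before the prefix check:

// Sketch: list every href the selector finds and flag the ones the prefix filter would drop.
var allHrefs = doc.DocumentNode.SelectNodes("//a[@href]");
if (allHrefs != null)
{
    foreach (HtmlNode link in allHrefs)
    {
        string href = link.Attributes["href"].Value;
        bool kept = href.StartsWith("http://") || href.StartsWith("https://") || href.StartsWith("www");
        Console.WriteLine((kept ? "kept:    " : "dropped: ") + href);
    }
}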

Upvotes: 1

Views: 131

Answers (1)

Thomas Levesque

Reputation: 292615

I suspect some links have a relative URL (e.g. href="/foo/bar/"), and they're filtered out by your condition that href should start with "http://" or "https://". In those cases you should combine the relative URL with the URL of the page:

Uri baseUri = new Uri(pageUrl);
Uri fullUri = new Uri(baseUri, relativeUrl);
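
A sketch of how this could be wired into the getLinks method from the question (the extra pageUrl parameter and the scheme check are assumptions added here for illustration, not part of the original method):

// Sketch: resolve relative hrefs against the page URL instead of filtering them out.
// The extra pageUrl parameter is an assumption about how the caller would pass it in.
private List<string> getLinks(HtmlAgilityPack.HtmlDocument document, string pageUrl)
{
    List<string> mainLinks = new List<string>();
    Uri baseUri = new Uri(pageUrl);
    var linkNodes = document.DocumentNode.SelectNodes("//a[@href]");
    if (linkNodes != null)
    {
        foreach (HtmlNode link in linkNodes)
        {
            string href = link.Attributes["href"].Value;
            Uri fullUri;
            // Relative hrefs (e.g. "/foo/bar/") are combined with the base URI;
            // absolute hrefs pass through unchanged. Non-http(s) schemes are skipped.
            if (Uri.TryCreate(baseUri, href, out fullUri)
                && (fullUri.Scheme == Uri.UriSchemeHttp || fullUri.Scheme == Uri.UriSchemeHttps))
            {
                mainLinks.Add(fullUri.AbsoluteUri);
            }
        }
    }
    return mainLinks;
}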

Upvotes: 1
