Reputation: 3314
I'm writing a C# console application to retrieve table info from an external HTML web page.
Example web page: (chessnuts.org)
I want to extract all the <td> records for data, match, opponent, result, etc. (23 rows in the example link above).
I have no control over this web page, which unfortunately isn't well formatted, so options I've tried like the HtmlAgilityPack and XML parsing simply fail. I have also tried a number of RegExes, but my knowledge of these is extremely poor; an example I tried is below:
string[] trs = Regex.Matches(html,
                             @"<tr[^>]*>(?<content>.*)</tr>",
                             RegexOptions.Multiline)
                    .Cast<Match>()
                    .Select(t => t.Groups["content"].Value)
                    .ToArray();
This returns a complete list of all the <tr>s (with many records I don't need), but I'm then unable to extract the data from them.
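The sort of second pass I was hoping to write over each matched row would look something like this (a rough sketch only; I know the pattern is naive and will probably choke on badly formed markup):
// Rough sketch: pull the <td> contents out of each matched <tr> block.
foreach (string tr in trs)
{
    var cells = Regex.Matches(tr, @"<td[^>]*>(?<cell>.*?)</td>",
                              RegexOptions.Singleline | RegexOptions.IgnoreCase)
                     .Cast<Match>()
                     .Select(m => m.Groups["cell"].Value.Trim())
                     .ToArray();
    if (cells.Length > 0)
        Console.WriteLine(string.Join(" | ", cells));
}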
UPDATE
Here is an example of the HtmlAgilityPack code I tried:
HtmlDocument doc = new HtmlDocument();
doc.LoadHtml(html);
foreach (HtmlNode table in doc.DocumentNode.SelectNodes("//table"))
{
    foreach (HtmlNode row in table.SelectNodes("tr"))
    {
        foreach (HtmlNode cell in row.SelectNodes("td"))
        {
            Console.WriteLine(cell.InnerText);
        }
    }
}
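As far as I can tell, SelectNodes returns null when nothing matches, so rows without <td> children make the inner loop throw a NullReferenceException. A null-guarded version of the same loops (using relative ".//" paths so each selection stays inside its parent node) looks like this:
var tables = doc.DocumentNode.SelectNodes("//table");
if (tables != null)
{
    foreach (HtmlNode table in tables)
    {
        var rows = table.SelectNodes(".//tr"); // relative XPath: only rows inside this table
        if (rows == null) continue;
        foreach (HtmlNode row in rows)
        {
            var cells = row.SelectNodes(".//td");
            if (cells == null) continue;
            foreach (HtmlNode cell in cells)
            {
                Console.WriteLine(cell.InnerText);
            }
        }
    }
}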
Upvotes: 0
Views: 332
Reputation: 445
If you want a full program, here it is :) I looked for this for hours.
using System;
using System.IO;
using System.Linq;
using System.Windows.Forms;

class ReadHTML
{
    internal void ReadText()
    {
        try
        {
            FolderBrowserDialog fbd = new FolderBrowserDialog();
            fbd.RootFolder = Environment.SpecialFolder.MyComputer; // start browsing at the root of My Computer
            if (fbd.ShowDialog() == DialogResult.OK)
            {
                string[] files = Directory.GetFiles(fbd.SelectedPath, "*.html", SearchOption.AllDirectories); // change the pattern to pick a different file type
                SaveFileDialog sfd = new SaveFileDialog(); // dialog for choosing where to save the output file
                //sfd.Filter = "Text File|*.txt"; // filter for text files only
                sfd.FileName = "Html Output.txt";
                sfd.Title = "Save Text File";
                if (sfd.ShowDialog() == DialogResult.OK)
                {
                    string path = sfd.FileName;
                    using (StreamWriter bw = new StreamWriter(File.Create(path)))
                    {
                        foreach (string f in files)
                        {
                            var html = new HtmlAgilityPack.HtmlDocument();
                            html.Load(f);
                            foreach (var table in html.DocumentNode.SelectNodes("//table").Skip(1).Take(1)) // skip the first table, take the second
                            {
                                foreach (var td in table.SelectNodes(".//td")) // relative XPath: only cells inside this table
                                {
                                    bw.WriteLine(td.InnerText); // write each cell's text to the output file
                                }
                            }
                        } // ends loop of files
                        bw.Flush();
                        bw.Close();
                    }
                }
                MessageBox.Show("Files found: " + files.Length);
            }
        }
        catch (UnauthorizedAccessException UAEx)
        {
            MessageBox.Show(UAEx.Message);
        }
        catch (PathTooLongException PathEx)
        {
            MessageBox.Show(PathEx.Message);
        }
    } // method ends
}
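To call it from a console app, the WinForms dialogs need a single-threaded apartment; a minimal sketch (Program and Main are just placeholder names, and the project is assumed to reference System.Windows.Forms and the HtmlAgilityPack package):
class Program
{
    [STAThread] // FolderBrowserDialog/SaveFileDialog require an STA thread
    static void Main()
    {
        new ReadHTML().ReadText();
    }
}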
Upvotes: 0
Reputation: 65049
I think you just need to fix your HtmlAgilityPack attempt. This works fine for me:
// Skip the first table on that page so we just get results
foreach (var table in doc.DocumentNode.SelectNodes("//table").Skip(1).Take(1)) {
    foreach (var td in table.SelectNodes(".//td")) { // ".//" keeps the selection inside this table
        Console.WriteLine(td.InnerText);
    }
}
This dumps a heap of data from the results table to the console, one cell per line.
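If you don't already have doc loaded, HtmlAgilityPack's HtmlWeb can fetch the page for you; a minimal sketch, with the URL as a placeholder for the actual results page:
var web = new HtmlAgilityPack.HtmlWeb();
var doc = web.Load("http://chessnuts.org/"); // placeholder: substitute the actual results page URL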
Upvotes: 1