Kosie

Reputation: 441

Golang: Processing 5 huge files concurrently

I have 5 huge logfiles (4 million rows each) that I currently process in Perl, and I thought I might try to implement the same in Go with its concurrency features. So, being very inexperienced in Go, I was thinking of the approach below. Any comments on the approach will be greatly appreciated. Some rough pseudocode:

var wg1 sync.WaitGroup
var wg2 sync.WaitGroup

func processRow(r Row) {
    wg2.Add(1)
    defer wg2.Done()
    res = <process r>
    return res
}

func processFile(f File) {
    wg1.Add(1)
    open(newFile File)
    defer wg1.Done()
    line = <row from f>
    result = go processRow(line)
    newFile.Println(result) // Write new processed line to newFile
    wg2.Wait()
    newFile.Close()

}

func main() {

    for each f logfile {
        go processFile(f)
    }
    wg1.Wait()
}

So, the idea is that I process these 5 files concurrently, and all rows of each file will in turn also be processed concurrently.

Will that work?

Upvotes: 4

Views: 2814

Answers (1)

Adam Smith

Reputation: 54173

You should definitely use channels to manage your processed rows. Alternatively, you could write another goroutine to handle your output.

var numGoWriters = 10

func processRow(r string, ch chan<- string) {
    res := process(r) // process is your per-row transformation
    ch <- res
}

func writeRow(w *bufio.Writer, mu *sync.Mutex, ch <-chan string, wg *sync.WaitGroup) {
    defer wg.Done()
    for s := range ch {
        mu.Lock() // the writer is shared by numGoWriters goroutines
        if _, err := w.WriteString(s + "\n"); err != nil {
            log.Println(err) // handle it
        }
        mu.Unlock()
    }
}

func processFile(f *os.File) {
    outFile, err := os.Create("/path/to/file.out")
    if err != nil {
        log.Println(err) // handle it
        return
    }
    defer outFile.Close()

    ch := make(chan string, 10) // play with this number for performance
    w := bufio.NewWriter(outFile)
    var mu sync.Mutex

    var writers sync.WaitGroup
    for i := 0; i < numGoWriters; i++ {
        writers.Add(1)
        go writeRow(w, &mu, ch, &writers)
    }

    var rows sync.WaitGroup
    fScanner := bufio.NewScanner(f)
    for fScanner.Scan() {
        line := fScanner.Text() // capture now: Text() is only valid until the next Scan
        rows.Add(1)
        go func() {
            defer rows.Done()
            processRow(line, ch)
        }()
    }

    rows.Wait()    // all rows have been processed and sent
    close(ch)      // lets the writers finish their range loops and exit
    writers.Wait() // writers have drained the channel
    w.Flush()      // push any buffered output to the file before it closes
}

Here we have processRow doing all the processing (I assumed the result is a string), writeRow doing all the output I/O, and processFile tying each file together. Then all main has to do is hand off the files and spawn the goroutines, et voilà.

func main() {
    var wg sync.WaitGroup

    filenames := [...]string{"here", "are", "some", "log", "paths"}
    for _, fname := range filenames { // range over the values, not the indices
        inFile, err := os.Open(fname)
        if err != nil {
            log.Println(err) // handle it
            continue
        }
        defer inFile.Close() // fine here: main returns right after wg.Wait()
        wg.Add(1)
        go func(f *os.File) {
            defer wg.Done()
            processFile(f)
        }(inFile)
    }
    wg.Wait()
}
Upvotes: 9
