temps hd

Reputation: 81

How to handle errors in a goroutine

I have a service that uploads files to AWS S3. I tried uploading both with and without goroutines. Without a goroutine, the handler waits until the upload finishes before responding; with a goroutine, the upload runs in the background and the response reaches the client faster.

But what if the upload fails when I use a goroutine, and the file never reaches AWS S3? How can I handle that?

Here is my upload function:

func uploadToS3(s *session.Session, size int64, name string, buffer []byte) (string, error) {

    tempFileName := "pictures/" + bson.NewObjectId().Hex() + "-" + filepath.Base(name)

    _, err := s3.New(s).PutObject(&s3.PutObjectInput{
        Bucket:               aws.String("myBucketNameHere"),
        Key:                  aws.String(tempFileName),
        ACL:                  aws.String("public-read"),
        Body:                 bytes.NewReader(buffer),
        ContentLength:        aws.Int64(size),
        ContentType:          aws.String(http.DetectContentType(buffer)),
        ContentDisposition:   aws.String("attachment"),
        ServerSideEncryption: aws.String("AES256"),
        StorageClass:         aws.String("INTELLIGENT_TIERING"),
    })

    if err != nil {
        return "", err
    }

    return tempFileName, nil
}

func UploadFile(db *gorm.DB) func(c *gin.Context) {
    return func(c *gin.Context) {
        file, err := c.FormFile("file")
        if err != nil {
            fmt.Println(err)
        }

        f, err := file.Open()
        if err != nil {
            fmt.Println(err)
        }

        defer f.Close()
        buffer := make([]byte, file.Size)
        _, _ = io.ReadFull(f, buffer) // a single f.Read may not fill the buffer
        s, err := session.NewSession(&aws.Config{
            Region: aws.String("location here"),
            Credentials: credentials.NewStaticCredentials(
                    "id",
                    "key",
                    "",
                ),
        })
        if err != nil {
            fmt.Println(err)
        }

        // Both return values are discarded here, so any upload error is lost.
        go uploadToS3(s, file.Size, file.Filename, buffer)

        c.JSON(200, "Image uploaded successfully")
    }
}

I was also wondering: what if there are many upload requests, say 10,000+ every 5-10 minutes? Could some files fail to upload because there are too many requests?

Upvotes: 0

Views: 1853

Answers (3)

Jonathan Hall

Reputation: 79546

This question is too broad for a single answer. There are, broadly speaking, three possible approaches:

  1. Wait for your goroutines to complete to handle any errors.

  2. Ensure your goroutines can handle (or possibly ignore) any errors they encounter, such that returning an error never matters.

  3. Have your goroutines log any errors, for handling later, possibly by a human, or possibly by some cleanup/retry function.

Which approach is best depends on the situation.
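
For illustration, a minimal sketch of approach 1: start one goroutine per upload, wait for all of them with a `sync.WaitGroup`, and collect errors on a buffered channel before responding. The `fakeUpload` helper is a hypothetical stand-in for the real S3 call, not part of the question's code:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// fakeUpload stands in for uploadToS3; it fails for one name
// so the example demonstrates error propagation.
func fakeUpload(name string) error {
	if name == "bad.png" {
		return errors.New("upload failed: " + name)
	}
	return nil
}

// uploadAll starts one goroutine per file, waits for all of them,
// and collects every error so the handler can report failures
// before responding to the client.
func uploadAll(names []string) []error {
	var wg sync.WaitGroup
	errCh := make(chan error, len(names)) // buffered: senders never block

	for _, n := range names {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			if err := fakeUpload(name); err != nil {
				errCh <- err
			}
		}(n)
	}

	wg.Wait()
	close(errCh)

	var errs []error
	for err := range errCh {
		errs = append(errs, err)
	}
	return errs
}

func main() {
	errs := uploadAll([]string{"a.png", "bad.png", "c.png"})
	fmt.Println(len(errs), "upload(s) failed")
}
```

The trade-off is that the handler no longer returns early, but the client gets an accurate status.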

Upvotes: 2

colm.anseo

Reputation: 22027

For any asynchronous task - such as uploading a file in a background go-routine - one can write the uploading function in such a way that it returns a chan error to the caller. The caller can then react to the file upload's eventual error (or nil for no error) at a later time by reading from the chan error.

However, if you are accepting many upload requests, I'd suggest instead creating worker upload go-routines that accept file uploads via a channel. An output "error" channel can track success/failure. And if need be, a failed upload can be written back to the original upload channel queue (including a retry tally and a retry max, so a problematic payload does not loop forever).

Upvotes: 0

Clément

Reputation: 804

The problem is that when using a goroutine, you immediately return a success message to your client. If that's really what you want, your goroutine needs to be able to recover from an error when uploading to S3 (so you don't lose the image). So either you take care of that, or you asynchronously inform your client that the upload failed, so the client can retry.

Upvotes: 1
