Mahmoud Ezzat

Reputation: 253

Node js, piping pdfkit to a memory stream

I am using pdfkit on my Node server to create PDF files, which I then upload to S3.

The problem is that the pdfkit examples pipe the PDF doc into a Node write stream, which writes the file to disk. I followed the example and it worked correctly; however, my requirement now is to pipe the PDF doc to a memory stream rather than save it to disk (I am uploading to S3 anyway).

I've tried a few Node memory-stream approaches, but none of them work with the PDF pipe; I could only write strings to memory streams.

So my question is: how do I pipe the pdfkit output to a memory stream (or something similar) and then read it as an object to upload to S3?

var fsStream = fs.createWriteStream(outputPath + fileName); 
doc.pipe(fsStream);

Upvotes: 16

Views: 16375

Answers (6)

Herosimo Sribiko

Reputation: 311

Thanks to Troy's answer, mine worked with get-stream as well. The difference is that I did not convert the result to a base64 string, but uploaded it to AWS S3 as a buffer.

Here is my code:

import PDFDocument from 'pdfkit'
import getStream from 'get-stream';
import { PutObjectCommand } from '@aws-sdk/client-s3';
import s3Client from 'your s3 config file';

const pdfGenerator = () => {
  const doc = new PDFDocument();
  doc.text('Hello, World!');
  doc.end();
  return doc;
}

const uploadFile = async () => {
  const pdf = pdfGenerator();
  const pdfBuffer = await getStream.buffer(pdf)

  await s3Client.send(
    new PutObjectCommand({
      Bucket: 'bucket-name',
      Key: 'filename.pdf',
      Body: pdfBuffer,
      ContentType: 'application/pdf',
    })
  );
}

uploadFile()

Upvotes: 1

Alan

Reputation: 10125

My code to return a base64 for pdfkit:

import * as PDFDocument from 'pdfkit'
import getStream from 'get-stream'

const pdf = {
  createPdf: async (text: string) => {
    const doc = new PDFDocument()
    doc.fontSize(10).text(text, 50, 50)
    doc.end()

    const data = await getStream.buffer(doc)
    let b64 = Buffer.from(data).toString('base64')
    return b64
  }
}

export default pdf

Upvotes: 0

TroyWolf

Reputation: 366

An updated answer for 2020. There is no need to introduce a new memory stream because "PDFDocument instances are readable Node streams".

You can use the get-stream package to make it easy to wait for the document to finish before passing the result back to your caller. https://www.npmjs.com/package/get-stream

const PDFDocument = require('pdfkit')
const getStream = require('get-stream')

const pdf = async () => {
  const doc = new PDFDocument()
  doc.text('Hello, World!')
  doc.end()
  return await getStream.buffer(doc)
}


// Caller could do this:
const pdfBuffer = await pdf()
const pdfBase64string = pdfBuffer.toString('base64')

You don't have to return a buffer if your needs are different. The get-stream readme offers other examples.

Upvotes: 20

A tweak of @bolav's answer worked for me when working with pdfmake rather than pdfkit. First, add memorystream to your project using npm or yarn.

const MemoryStream = require('memorystream');
const PdfPrinter = require('pdfmake');
const pdfPrinter = new PdfPrinter();
const docDef = {};
const pdfDoc = pdfPrinter.createPdfKitDocument(docDef);
const memStream = new MemoryStream(null, {readable: false});
const pdfDocStream = pdfDoc.pipe(memStream);
pdfDoc.end();
pdfDocStream.on('finish', () => {
  console.log(Buffer.concat(memStream.queue));
});

Upvotes: 0

josh3736

Reputation: 144832

There's no need to use an intermediate memory stream1 – just pipe the pdfkit output stream directly into an HTTP upload stream.

In my experience, the AWS SDK is garbage when it comes to working with streams, so I usually use request.

var request = require('request');

var upload = request({
    method: 'PUT',
    url: 'https://bucket.s3.amazonaws.com/doc.pdf',
    aws: { bucket: 'bucket', key: ..., secret: ... }
});

doc.pipe(upload);

1 - in fact, it is usually undesirable to use a memory stream because that means buffering the entire thing in RAM, which is exactly what streams are supposed to avoid!

Upvotes: 4

bolav

Reputation: 6998

You could try something like this, and upload it to S3 inside the end event.

var PDFDocument = require('pdfkit');
var doc = new PDFDocument();

var MemoryStream = require('memorystream');
var memStream = new MemoryStream(null, {
   readable : false
});

doc.pipe(memStream);

doc.on('end', function () {
   var buffer = Buffer.concat(memStream.queue);
   awsservice.putS3Object(buffer, fileName, fileType, folder).then(function () { }, reject);
})

Upvotes: 2
