Reinaldo Chaves

Reputation: 995

Is there any way to open a 10GB file in Colaboratory?

In Colaboratory, with Python 3, I enabled the GPU via Runtime > Change runtime type.

Then I ran this code:

import pandas as pd
import numpy as np

# Code to read a csv file into Colaboratory:
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)

# Link to a 10GB file in Google Drive
link = ''

# file_id instead of id, to avoid shadowing the built-in id()
fluff, file_id = link.split('=')
print(file_id)  # Verify that you have everything after '='

downloaded = drive.CreateFile({'id': file_id})
downloaded.GetContentFile('empresa.csv')

But I cannot open the file because it runs out of memory: Your session crashed after using all available RAM

I have:

Connected to “Python 3 Google Compute Engine backend (GPU)”. RAM: 0.64 GB / 12.72 GB. Disk: 25.14 GB / 358.27 GB.

Is there any way to increase the RAM in Colaboratory, free or paid?

-/-

I have also tried an alternative approach, mounting my Drive as a filesystem:

from google.colab import drive
drive.mount('/content/gdrive')

with open('/content/gdrive/My Drive/foo.txt', 'w') as f:
  f.write('Hello Google Drive!')
!cat /content/gdrive/My\ Drive/foo.txt

# Drive REST API
from google.colab import auth
auth.authenticate_user()

# Construct a Drive API client
from googleapiclient.discovery import build
drive_service = build('drive', 'v3')

# Downloading data from a Drive file into Python
file_id = ''

import io
from googleapiclient.http import MediaIoBaseDownload

request = drive_service.files().get_media(fileId=file_id)
downloaded = io.BytesIO()  # note: BytesIO buffers the entire download in RAM
downloader = MediaIoBaseDownload(downloaded, request)
done = False
while not done:
  # _ is a placeholder for a progress object that we ignore.
  _, done = downloader.next_chunk()

downloaded.seek(0)
print('Downloaded file contents are: {}'.format(downloaded.read()))

But the problem continues: Your session crashed after using all available RAM

Upvotes: 1

Views: 950

Answers (2)

Bob Smith

Reputation: 38704

My recommendation would be to mount your Drive as a filesystem rather than attempting to load the file entirely into memory.

Then, you can read the CSV directly from the filesystem one chunk at a time, incrementally.
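As a minimal sketch of that approach (assuming the file was saved to Drive as empresa.csv, the name used in the question; the path and chunk size are illustrative):

from google.colab import drive
import pandas as pd

drive.mount('/content/gdrive')

# Hypothetical location of the CSV inside Drive; adjust as needed.
csv_path = '/content/gdrive/My Drive/empresa.csv'

# chunksize makes read_csv return an iterator of DataFrames, so only
# one chunk (here 1,000,000 rows) is held in RAM at any moment.
row_count = 0
for chunk in pd.read_csv(csv_path, chunksize=1_000_000):
    # Process each chunk, keeping only small aggregates across iterations.
    row_count += len(chunk)

print('Total rows:', row_count)

Peak memory stays around the size of a single chunk, so the 12 GB RAM limit stops being a problem as long as the per-chunk results you keep are small.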

Upvotes: 1

StephenG

Reputation: 2881

You can always connect to a local runtime backend, so the notebook runs on your own machine and uses its RAM and disk instead of the hosted VM's.
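A sketch of the setup, following Colab's published local-runtime instructions (run these on your own machine, where Jupyter is installed; port 8888 is just the conventional choice):

pip install jupyter_http_over_ws
jupyter serverextension enable --py jupyter_http_over_ws
jupyter notebook \
  --NotebookApp.allow_origin='https://colab.research.google.com' \
  --port=8888 \
  --NotebookApp.port_retries=0

Then choose "Connect to local runtime" in Colab and paste the URL (with token) that the notebook server prints. The 10GB file is then limited only by your own hardware.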

Upvotes: 1
