Mpizos Dimitris

Reputation: 5001

Importing from local directory in python

I am a bit confused about why an error happens in the following case:

I have the following project:

/home/projects/project1

and:

user:home/projects/project1$ ls
file1.py
file2.py
__init__.py
data

where data is a folder with some files.

And I do the following:

from os import path
import sys
sys.path.append(path.abspath('/home/projects/project1'))    

from file1 import function1

That works, and import file2 works fine as well.

file1 depends on some functions from file2; for example, file1 starts with:

from file2 import function2

So when I call:

res = function1(some_input)

I get the following error:

File "/home/dimitris/spark-2.1.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/serializers.py", line 434, in loads
return pickle.loads(obj)
ImportError: No module named file2

Why is this happening?

EDIT

Not sure if it's relevant, but I am building this module and trying to import it in an Apache Zeppelin notebook to use it there. That is where I get the error.

It seems that it could be related to this question: Pyspark --py-files doesn't work
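
For reference, a common fix in that direction is to ship the module files to the Spark executors with sc.addPyFile, so the workers can find file2 when they unpickle the function. A minimal sketch, assuming sc is the SparkContext that the Zeppelin Spark interpreter provides:

from os import path
import sys
sys.path.append(path.abspath('/home/projects/project1'))

# Make file1 and file2 available on the executors, not just the driver
sc.addPyFile('/home/projects/project1/file1.py')
sc.addPyFile('/home/projects/project1/file2.py')

from file1 import function1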

Upvotes: 1

Views: 763

Answers (2)

rooger

Reputation: 135

You can do it like this:

file1.py:

def hello():
    print('hello from file1')

file2.py:

import file1 as f  # import the whole module, not just a single function
def get_hello():
    return f.hello()
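
Called from a directory that is on sys.path, this would look like the following (a small sketch; the printed text comes from hello() in file1):

import file2
file2.get_hello()  # prints 'hello from file1'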

Upvotes: 0

Chen A.

Reputation: 11328

You probably have the import file2 statement at the beginning of file1.py. If you imported file1 as a whole, it would load the entire module, but since you import just a function, you miss the dependency.

You can fix it either by importing file1 as a whole or by adding an import file2 statement at the beginning of the function1 definition. E.g.:

# file1.py
def function1(some_input):
    import file2  # deferred import, resolved when the function is called
    # ... your code ...
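
For the first option, the notebook side would import the module and call the function through it (a sketch using the names from the question):

import file1
res = file1.function1(some_input)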

Upvotes: 1
