uxp100

Reputation: 3

Is modifying sys.path the best way to handle a project-specific module?

I'm (re)writing, in Python, part of the testing tools supporting a large multi-language project; the tools are stored alongside the project itself. At some point it became obvious that some code could be refactored out into its own package. But where should that package live so it can be shared among the Python tools?

There is a company-wide python-lib directory on the Python path, but that shares with thousands of people what is really only important to tens, and, more importantly, it is not project-specific.

For C test tools, we use LD_LIBRARY_PATH to point to our test libs, either to our own build of the libs or to some automated build output. I can modify sys.path to include any directory I want, which behaves somewhat like LD_LIBRARY_PATH, except it's easier for my teammates.
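Concretely, I mean something like this at the top of each tool (the directory layout here is only illustrative):

    import os
    import sys

    # Illustrative only: prepend the shared test-code directory that lives next
    # to the tools, so the refactored package wins over anything else on the path.
    _SHARED_DIR = os.path.normpath(
        os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "shared")
    )
    if _SHARED_DIR not in sys.path:
        sys.path.insert(0, _SHARED_DIR)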

This doesn't seem Pythonic. I am aware of virtualenv, though not well informed about it, and I have these concerns:

Upvotes: 0

Views: 52

Answers (1)

ShadowRanger

Reputation: 155497

If this is solely for testing purposes, updating your PYTHONPATH while running test code would be roughly the Python equivalent of updating LD_LIBRARY_PATH for testing C code. In much the same way that LD_LIBRARY_PATH pushes some directories to the front of the shared-object lookup path, PYTHONPATH pushes specific directories to the front of sys.path, and does so from the moment Python begins running (so you know there aren't any weird site-triggered imports that might take place before you have time to update sys.path in your main module).
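For example, with your shared directory listed in PYTHONPATH, a quick check from inside the interpreter (purely illustrative) shows those entries already sitting at the front of sys.path before any of your code has run:

    import os
    import sys

    # PYTHONPATH is a list of directories separated by os.pathsep
    # (":" on Linux, ";" on Windows); Python prepends them to sys.path at startup.
    wanted = [p for p in os.environ.get("PYTHONPATH", "").split(os.pathsep) if p]
    print("PYTHONPATH entries:", wanted)
    print("front of sys.path: ", sys.path[:len(wanted) + 2])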

Using it for production is frowned upon (among other things, the same environment variable is read by both Python 2 and 3, so it can cause problems if any code at that location isn't compatible with both versions), but for test code, it's no more unreasonable than tweaking LD_LIBRARY_PATH.

Virtual environments might work, but only if you could somehow publish the virtualenv company-wide; they store full copies of their local libraries and (by default) cut off access to packages installed site-wide (to provide a clean environment). A test-centered virtualenv would probably want to be created with the --system-site-packages switch so it acts as a supplement to the system install, not a replacement.
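As a rough sketch of that setup, the standard-library venv module (standing in here for virtualenv, which accepts the same --system-site-packages switch on its command line) can build such an environment programmatically; the target path is just an example:

    import venv

    # Create an environment that supplements, rather than replaces, the
    # site-wide packages. "/tmp/test-env" is only an example location.
    venv.create(
        "/tmp/test-env",
        system_site_packages=True,  # keep access to company-wide packages
        with_pip=True,              # so test-only dependencies can be installed
    )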

Activating a virtualenv in bash-like shells is simply a matter of running source /path/to/virtualenv/bin/activate, and deactivating it is just running deactivate (added to your shell as a function when the activate script is sourced). Virtualenvs are generally safer than modifying PYTHONPATH (among other things, they use version-specific sub-directories for each major.minor version of Python, so you won't accidentally run 3.6-specific code on 2.7), but you do need to write your test code as real packages (with setup.py files and all) to manage them properly. I personally think this is worth it (you'll need to learn the Python packaging mechanisms eventually), but it is a higher initial skill bar.
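A minimal setup.py for such a test package might look like the following; the name and version are placeholders, and setuptools is assumed to be available:

    from setuptools import setup, find_packages

    # Placeholder metadata: "projtestlib" stands in for whatever the refactored
    # shared test code ends up being called.
    setup(
        name="projtestlib",
        version="0.1.0",
        packages=find_packages(),
        python_requires=">=2.7",
    )

Once packaged this way, installing it into the virtualenv (e.g. with pip's editable/develop mode) makes it importable by every tool without anyone touching sys.path.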

Upvotes: 1
