Reputation: 18530
I've already been using pip and virtualenv (and actually sometimes still prefer a well organized combination through an SVN repository, wise usage of svn:externals, and dynamic sys.path).
But this time for a new server installation I'd like to do things in the right way.
So I go to the pip installation page and it says:
The recommended way to use pip is within virtualenv, since every virtualenv has pip installed in it automatically. This does not require root access or modify your system Python installation. [...]
Then I go to the virtualenv installation page and it suggests:
You can install virtualenv with pip install virtualenv, or the latest development version with pip install virtualenv==dev. You can also use easy_install [...]
And pip is supposed to replace easy_install, of course :)
Granted, they both explain all alternative ways to install.
But... which one should go first? And should I favor systemwide pip or not?
I see one main reason to weigh here, but there might be others.
If I want everybody to have a virtualenv available, I might just install a system-wide pip (e.g. on Ubuntu, sudo aptitude install python-pip), then use it to install virtualenv (sudo pip install virtualenv).
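For what it's worth, the end state that bootstrap aims at can be sketched with the standard-library venv module (used here as a stand-in for virtualenv; the directory name is invented for the example):

```python
import os
import tempfile
import venv

# Create an isolated environment with its own pip (venv standing in for
# virtualenv; the path is just an example).
env_dir = os.path.join(tempfile.mkdtemp(), "myproject-env")
venv.create(env_dir, with_pip=True)

# The environment carries its own interpreter and its own pip, so
# anything installed into it never touches the system Python.
bindir = os.path.join(env_dir, "Scripts" if os.name == "nt" else "bin")
print(sorted(os.listdir(bindir)))
```

This is exactly why the pip docs can say that using pip inside a virtualenv "does not require root access or modify your system Python installation".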
edit: another reason to weigh: the virtualenvwrapper install instructions (but not the docs) say:
Note In order to use virtualenvwrapper you must install virtualenv separately.
I'm not exactly sure what "separately" means there (I had never noticed it).
Otherwise, which one should go first, and does it really make a difference or not?
The closest existing question (and its answers) is the first of the following (in particular see @elarson's answer); the second looks overly complicated:
but I feel they all fail to answer my question in full: system-wide vs. local, but also whether pip or virtualenv should go first (and why does the documentation of each send you to the other to start with!!!)
Upvotes: 7
Views: 2776
Reputation: 18530
I haven't decided on a definitive solution yet, but here is what is going on right now, partly based on @nutjob's comments (no, I haven't switched to buildout for the time being, but I will take some time for it later on!).
I have a big, powerful server with quite a lot of django applications. I mostly use gunicorn + supervisord.
These requirements dictate the following, though slightly different settings would probably call for a different setup. Each user creates a single environment with
python virtualenv.py VIRTUALENVNAME
(I don't install virtualenvwrapper). This means each user has their own, single virtualenv.
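As an illustration, a per-application supervisord entry pointing gunicorn at the user's virtualenv might look like this (all paths, user names, and module names below are invented for the example):

```ini
; hypothetical supervisord program section (placeholders throughout)
[program:myapp]
command=/home/myapp/VIRTUALENVNAME/bin/gunicorn myapp.wsgi:application
directory=/home/myapp/src
user=myapp
autostart=true
autorestart=true
```

The key trick is calling the gunicorn binary by its absolute path inside the virtualenv, so supervisord runs the app with that environment's Python without any activation step.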
Should I start all over I would probably move to a buildout or similar solution (I'm not a big fan of what happens when you mix supervisor and virtualenv), but I'm right now pretty happy with this one...
Upvotes: 1
Reputation: 1399
tl;dr answer: virtualenv first. You can have two of them, one each for Python 2.x and 3.x.
[edit]
I am really doubtful whether installing virtualenv system-wide vs. per-user even matters (there is no real install; you merely download and execute a script). The whole point of using virtualenv is to create isolated development sandboxes so that the libraries of one project don't conflict with those of another. For example, you can have a Python 2.x project using Beautiful Soup < 4.x and a Python 3.x project using Beautiful Soup 4.0 in two different virtual environments.
How you get the virtualenv script onto your system doesn't really matter, and since pip comes self-contained within every virtualenv, it just makes sense to get virtualenv first. Also, once you are in with Python you will have many projects, and for each one the recommended way is to create a virtual environment and then install its dependencies via pip. Later you can do "pip freeze > requirements.txt" and then "pip install -r requirements.txt" to replicate your exact libraries across two systems (say, dev and production), and so on...
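That freeze/replicate round-trip can be sketched end to end (using the stdlib venv module in place of virtualenv; directory and file names are invented for the example):

```python
import os
import subprocess
import tempfile
import venv

# Build a throwaway environment as a stand-in for a project virtualenv.
work = tempfile.mkdtemp()
env_dir = os.path.join(work, "env")
venv.create(env_dir, with_pip=True)
bindir = "Scripts" if os.name == "nt" else "bin"
python = os.path.join(env_dir, bindir, "python")

# "pip freeze" pins exactly what this environment contains,
# one "package==version" line per installed package...
frozen = subprocess.run([python, "-m", "pip", "freeze"], check=True,
                        capture_output=True, text=True).stdout
req = os.path.join(work, "requirements.txt")
with open(req, "w") as f:
    f.write(frozen)

# ...and on another host you would replicate the environment with:
#   pip install -r requirements.txt
print(repr(frozen))
```

A fresh environment freezes to an (almost) empty list, which is the point: the file records only what you deliberately installed.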
Upvotes: 6
Reputation: 51
Half of one, six dozen of another. (Let that sink in. Ha ha.)
But more seriously, do you honestly have multiple users on your system? These days, even Linux hosts tend to be for a single user, and where there are multiple user IDs they tend to be servers that run multiple processes under various quarantined user IDs. Given that, making life easier for all users isn't quite so relevant.
On the other hand, multiple services each using Python may have conflicting requirements, rare as it may be that it comes down to even the required version of pip. Given that, I'd tend to prefer a global installation of virtualenv in order to make pristine quasi-installations of Python.
Yet I'd like to point out one other idea: Buildout, http://www.buildout.org/
Buildout does the same thing as virtualenv but takes a remarkably different approach. You write a buildout configuration file (buildout.cfg) that lists your various eggs and how they'll be connected, and specifies settings for "buildout recipes" that handle specialized situations (like a Django deployment, a Buildbot server, a Plone website, a Google App Engine app, etc.).
You then use your system Python to bootstrap the buildout, run it, and it generates an isolated setup—like a virtualenv.
But the best part: it's repeatable. You can take the same buildout.cfg to another host and get the same setup. That's much harder to do with a virtualenv!
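To give the flavor, a minimal buildout.cfg might look like this (zc.recipe.egg is a real recipe; the part name and egg list are just examples):

```ini
[buildout]
parts = app

[app]
recipe = zc.recipe.egg
eggs =
    Django
    gunicorn
```

You bootstrap with your system Python and then run bin/buildout; the same file checked into version control reproduces the same scripts and eggs on any host.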
Upvotes: 1