RobMasters

Reputation: 4148

Deploying my Symfony2 app with Capifony has started breaking the live release's cache

This could be a bit of a niche issue, but I'm hoping somebody can help. Deployments had always worked fine until yesterday, when I attempted to push some changes to production; since then, the last three deployments have all temporarily broken the live site. Here's one of the exceptions from the logs:

[2012-12-18 12:12:16] request.CRITICAL: Exception thrown when handling an exception (InvalidArgumentException: The directory "/path/to/app/releases/20121217134758/app/cache/prod/jms_diextra/metadata" is not writable.) [] []
[2012-12-18 12:12:18] request.CRITICAL: InvalidArgumentException: The directory "/path/to/app/releases/20121217134758/app/cache/prod/jms_diextra/metadata" is not writable. (uncaught exception) at /path/to/app/releases/20121217134758/vendor/jms/metadata/src/Metadata/Cache/FileCache.php line 17 [] []

I don't understand why the cache directory of the previous release (the current release before deploying) should be affected, though. Here is where it happens during my deployment:

--> Updating code base with remote_cache strategy
--> Creating cache directory...........................✔
--> Creating symlinks for shared directories...........✔
--> Creating symlinks for shared files.................✔
--> Normalizing asset timestamps.......................✔
Do you want to copy last release vendor dir then do composer install ?: (y/N)
y
--> Copying vendors from previous release..............✔
--> Downloading Composer...............................✔
--> Updating Composer dependencies..................... BREAK HAPPENS HERE OR SOON BEFORE

As you can see, my cache directory isn't even shared between deployments:

# in deploy.rb

set :shared_files,      ["app/config/parameters.yml"]
set :shared_children, [app_path + "/logs", web_path + "/uploads", web_path + "/videos", app_path + "/spool"]
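(The writable_dirs used in the permissions hook further down isn't something I set explicitly; as far as I can tell it falls back to capifony's default, which amounts to something like this:)

# in deploy.rb -- not set explicitly; capifony's default is roughly this
set :writable_dirs, [app_path + "/cache", app_path + "/logs"]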

Thankfully, I was ready for it after the first time and had an SSH console sitting there with a sudo chmod -R 0777 app/cache/ app/logs/ ready to fire, but this isn't exactly a permanent solution.

NOTE: I'm currently handling permissions of cache/log directories as a custom post-deploy hook:

# in deploy.rb

after "deploy:finalize_update" do
  # Ensure htaccess points to app.php and not app_dev.php
  run "sed -i 's/app_dev/app/' #{latest_release}/#{web_path}/.htaccess"

  # Use a unique APC prefix to guarantee there are no clashes
  run "sed -i 's/_VERSION/_#{release_name}/' #{latest_release}/#{web_path}/app.php"

  # Set permissions of all 'writable_dirs' using sudo
  pretty_print "--> Setting permissions"
  dirs = []
  writable_dirs.each do |link|
    if shared_children && shared_children.include?(link)
      absolute_link = shared_path + "/" + link
    else
      absolute_link = latest_release + "/" + link
    end
    dirs << absolute_link
  end
  sudo sprintf("chmod -R 0777 %s", dirs.join(' '))
end

Update

During my latest deployment I noticed the exceptions started occurring at a later point, so it has nothing to do with the Composer dependencies. I suspect the cause is a cron job that calls the current release's console and, in doing so, writes to that release's cache, presumably as a different user, which would leave cache files the web server can't write to. This would make sense, as I only set the cron live recently.

But I'm not sure how to resolve this. Looking at the "Setting up Permissions" section in the Symfony docs, there appear to be a couple of options. I don't know anything about setfacl, so I'd be worried about breaking something. Would using the umask option be a good idea?
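For what it's worth, my (untested) understanding is that the setfacl option from the docs would translate into a hook along these lines; the www-data user and the app/cache and app/logs paths here are assumptions for my setup rather than anything I've verified:

# in deploy.rb -- untested sketch of the docs' setfacl option
after "deploy:finalize_update" do
  # Give both the web server user and the deploying user write access...
  sudo "setfacl -R -m u:www-data:rwX -m u:`whoami`:rwX #{latest_release}/app/cache #{latest_release}/app/logs"
  # ...and set matching default ACLs so files created later inherit them
  sudo "setfacl -dR -m u:www-data:rwX -m u:`whoami`:rwX #{latest_release}/app/cache #{latest_release}/app/logs"
end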

Upvotes: 3

Views: 1340

Answers (1)

RobMasters

Reputation: 4148

I ended up going for the umask option I mentioned in the update. As I figured the problem was caused by the console, I only uncommented the umask(0000); line in app/console, not in web/app.php or web/app_dev.php. The problem hasn't occurred in the few deployments I've made since making this change, so I guess it's done the trick.
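If you'd rather not commit that change to app/console, I imagine the same thing could be done at deploy time with another sed call in the finalize_update hook, something like this (untested sketch, assuming the standard edition's commented //umask(0000); line):

# in deploy.rb -- untested alternative to editing app/console in the repo
after "deploy:finalize_update" do
  # Uncomment the umask(0000); line at the top of this release's app/console
  run "sed -i 's|^//umask(0000);|umask(0000);|' #{latest_release}/#{app_path}/console"
end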

Upvotes: 1
