Reputation: 287400
For one of my projects I'm getting this exception every now and then:
ActionView::MissingTemplate: Missing template blogs/index with {:handlers=>[:rxml, :erb, :builder, :rjs, :haml, :rhtml], :formats=>["image/jpeg", "image/pjpeg", "image/png", "image/gif"], :locale=>[:en, :en]} in view paths "/var/www/keeponposting/releases/20110403083651/app/views"
It seems someone is requesting a URL that isn't an image, but with an Accept header that only allows images:
HTTP_ACCEPT "image/jpeg, image/pjpeg, image/png, image/gif"
Any ideas what to do about it? Do I have to implement a handler for one of those formats and return "" to get rid of these exceptions, or is there a better way to handle it?
Now I'm also getting this:
ActionView::MissingTemplate: Missing template blogs/index with {:formats=>["text/*"], :handlers=>[:rjs, :haml, :rhtml, :erb, :rxml, :builder], :locale=>[:en, :en]} in view paths "/var/www/keeponposting/releases/20110415040109/app/views"
Isn't there a way to send back HTML no matter what format is requested?
Upvotes: 15
Views: 5192
Reputation: 6315
Add formats: [:html] to render:
def action
  render formats: [:html]
end
Upvotes: 0
Reputation: 1799
I did this: In my controller, I put a before filter:
def acceptable_mime_type
  unless request.accepts.detect { |a| a == :json || a == :html } || request.accepts.include?(nil)
    if request.accepts.detect { |a| a.to_s.include?("*/*") }
      ActionDispatch::Request.ignore_accept_header = true
    else
      render text: "Unacceptable", status: 406
      false
    end
  end
end
It checks for my supported types (e.g. JSON, HTML) and nil (nil renders the default HTML); then, if those mime types aren't supported, it checks for "*/*" in the header. If it finds it, I force Rails to render the default mime type by ignoring the accept header.
To test this in rspec, I had to do this:
describe 'bad header' do
  describe 'accept' do
    let(:mimeTypes) { ["application/text, application/octet-stream"] }

    it 'should return a 406 status code' do
      request.accept = mimeTypes
      get 'index'
      expect(response.response_code).to eq 406
    end

    describe 'with */* included' do
      it 'should return a 200 status code' do
        request.accept = ["*/*"] + mimeTypes
        get 'index'
        expect(response.response_code).to eq 200
      end
    end
  end
end
For some reason, I was having problems sending accept headers properly in RSpec using the methods described here, but I found that if I set request.accept to an Array, both my tests passed. Weird, I know, but I'm moving on to the next issue for now.
Upvotes: 0
Reputation: 17528
Here is a stricter response, suggested by purp in the discussion on issue 4127:
class FooController < ApplicationController
  rescue_from ActionView::MissingTemplate, :with => :missing_template

  def missing_template
    render :nothing => true, :status => 406
  end
end
Upvotes: 6
Reputation: 16274
I fixed this issue (a few minutes ago – so far, so good) with this new Rails 3.1 option:
config.action_dispatch.ignore_accept_header = true
As mentioned in this Rails issue. That goes in config/application.rb.
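In case it helps, here is a rough sketch of where that line sits; the module name below is hypothetical and would match your own application's name:

# config/application.rb (module name is just an example)
module Keeponposting
  class Application < Rails::Application
    # Ignore the client's Accept header and fall back to the
    # format implied by the URL, or the default (HTML).
    config.action_dispatch.ignore_accept_header = true
  end
end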
I tested it like so in an RSpec request test (using Capybara):
it "should not break with HTTP_ACCEPT image/*;w=320;h=420 from iPhone" do
page.driver.header "Accept", "image/*;w=320;h=420"
visit "/some/path"
page.should have_content("Some content")
end
Upvotes: 8
Reputation: 4184
I agree about blocking the offending robot, but if you really want to force the response format, add a before_filter and set request.format = :html, like this:
before_filter :force_request_format_to_html

private

def force_request_format_to_html
  request.format = :html
end
Upvotes: 13
Reputation: 4747
I would be tempted to rescue the MissingTemplate in your application controller and log the Referrer header to see what's triggering this request. You never know, it might be some obscure part of your own app!
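Something along these lines would do it; this is only a sketch, and the handler name and log message are mine, not from the answer:

class ApplicationController < ActionController::Base
  rescue_from ActionView::MissingTemplate, :with => :log_missing_template

  private

  def log_missing_template(exception)
    # Record the referrer and Accept header so you can tell a bot from a broken internal link.
    logger.warn "MissingTemplate: #{exception.message} (referer: #{request.referer.inspect}, accept: #{request.headers['Accept'].inspect})"
    head :not_acceptable
  end
end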
If on the other hand you're confident this is being caused by a robot, have you considered adding the offending URL to your robots.txt file? For example:
User-Agent: YandexImages
Disallow: /your/failed/path
Replace '/your/failed/path' with the path that the robot is stumbling over. If the robot is struggling all over the place, you could just disallow access to the whole site for that particular robot:
User-Agent: YandexImages
Disallow: /
I think this is a cleaner and lighter approach than implementing a handler specifically to suppress errors from a seemingly badly behaved bot.
Upvotes: 5