Reputation: 7364
I have my rewrite rules below, but I cannot understand why my second rule is not working. When I disable the first, the second works properly. Is there a limitation when the URL patterns of two rules are exactly the same? Note that with my first rule I am trying to proxy the request to NuxtJS when a Baidu spider crawls our website; otherwise the static HTML files in the wwwroot directory should be served.
<?xml version="1.0" encoding="UTF-8"?>
<rules>
  <clear />
  <rule name="ReverseProxyInboundRule1" enabled="true" stopProcessing="false">
    <match url="(.*)" />
    <action type="Rewrite" url="http://localhost:3000/{R:1}" />
    <conditions>
      <add input="{HTTP_USER_AGENT}" pattern="^((?Baidu).)*$" />
    </conditions>
  </rule>
  <rule name="StaticHTMLForBaiduCrawler" enabled="true" stopProcessing="false">
    <match url="(.*)" />
    <conditions>
      <add input="{HTTP_USER_AGENT}" pattern="^((?!Baidu).)*$" />
    </conditions>
    <action type="Rewrite" url="{R:1}/index.html" />
  </rule>
</rules>
Upvotes: 0
Views: 104
Reputation: 16950
There's no such limitation.
Your first pattern is syntactically invalid, so the rewrite module throws an error instead of evaluating the rule:
The expression "^((?Baidu).)*$" has a syntax that is not valid.
You presumably meant the negative lookahead (?!Baidu) and dropped the !. Since you don't need an exact match anyway, a simple pattern like Baidu works just as well. Take a look at the following rules and note the negate="true" on the second condition.
<rules>
  <clear />
  <rule name="ReverseProxyInboundRule1" enabled="true" stopProcessing="true">
    <match url="(.*)" />
    <action type="Rewrite" url="http://localhost:3000/{R:1}" />
    <conditions>
      <!-- if user agent contains Baidu -->
      <add input="{HTTP_USER_AGENT}" pattern="Baidu" />
    </conditions>
  </rule>
  <rule name="StaticHTMLForBaiduCrawler" enabled="true" stopProcessing="true">
    <match url="(.*)" />
    <conditions>
      <!-- if user agent does not contain Baidu -->
      <add input="{HTTP_USER_AGENT}" pattern="Baidu" negate="true" />
    </conditions>
    <action type="Rewrite" url="{R:1}/index.html" />
  </rule>
</rules>
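As a sanity check, here is a minimal sketch in Python (the user-agent strings are made up for illustration, and IIS actually uses .NET regular expressions, but both engines treat these particular patterns the same way). It shows that the tempered pattern ^((?!Baidu).)*$ accepts exactly the strings that a negated plain Baidu search accepts:

import re

# Hypothetical user-agent strings, for illustration only.
baidu_ua = "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
browser_ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0"

# Tempered pattern: every character up to the end must not start "Baidu".
tempered = re.compile(r"^((?!Baidu).)*$")
assert tempered.match(browser_ua) is not None
assert tempered.match(baidu_ua) is None

# Simpler, equivalent check: does the string contain "Baidu" at all?
# pattern="Baidu" performs this search, and negate="true" flips the result.
assert re.search("Baidu", baidu_ua) is not None
assert re.search("Baidu", browser_ua) is None

Note also that stopProcessing is set to true in both rules, so once the first rule rewrites a Baidu request to http://localhost:3000, the static-HTML rule is never evaluated; that is what lets both rules safely share the same url="(.*)" match.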
Upvotes: 1