Reputation: 35
I am working on a chatbot with Watson Assistant and Discovery, but I can't receive any answer from Discovery.
This is the dialog node where I use the Discovery intent,
and this is the output from the UI that I designed:
function updateMessage(res, input, response) {
  if (response.output.action === 'callDiscovery') {
    // If you want to use natural_language_query, set the user's input here:
    params.natural_language_query = response.input.text || null;
    console.log('Calling discovery');
    discovery.query(params, (error, returnDiscovery) => {
      if (error) {
        return next(error);
      }
      console.log('return from discovery: ' + JSON.stringify(returnDiscovery));

      // If you want to send all TEXT returned from the Discovery results,
      // uncomment these lines instead of the passage line below:
      // var text = '';
      // for (var i = 0; i < returnDiscovery.results.length; i++) {
      //   text += returnDiscovery.results[i].text + '<br>';
      // }
      // response.output.text = 'Discovery call with success, check the results: <br>' + text;

      // Sending the first PASSAGE returned from the Discovery results:
      response.output.text = 'Discovery call with success, check the results: <br>' + returnDiscovery.passages[0].passage_text;
      return res.json(response);
    });
  } else if (response.output && response.output.text) {
    return res.json(response);
  }
}
Upvotes: 0
Views: 196
Reputation: 323
The tokenizer argument in the TfidfVectorizer
overrides the string tokenization step. For example, you can use a function like the one shown below, which accepts a string as its argument, tokenizes it, and returns the tokenized words.
def tokenizerFunc(x):
    return x.split()
This function accepts a string as input and returns a list of words. The reason you get the error "__init__() takes 1 positional argument but 2 were given" is that WordNetLemmatizer() does not accept any arguments; however, when WordNetLemmatizer is used as the tokenizer in TfidfVectorizer,
each document string is passed to it as an argument to be tokenized.
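The error can be reproduced without nltk or scikit-learn; this is a minimal sketch where the hypothetical class NoArgTokenizer stands in for WordNetLemmatizer:

```python
# Stand-in for WordNetLemmatizer: like it, __init__ takes no arguments
# besides self (hypothetical class, for illustration only).
class NoArgTokenizer:
    def __init__(self):
        pass

# TfidfVectorizer calls tokenizer(document), so passing the class itself
# as the tokenizer means the document string is handed to __init__:
try:
    NoArgTokenizer("senior data scientist")
except TypeError as e:
    print(e)  # e.g. "__init__() takes 1 positional argument but 2 were given"
```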
In case you want to lemmatize and tokenize simultaneously, you can use the function below:
lemmatizer = WordNetLemmatizer()

def tokenizerFunc(x):
    tokenizedList = x.split()
    lemmatizedList = [lemmatizer.lemmatize(i) for i in tokenizedList]
    return lemmatizedList
You have to use it like this:
TFIDF = TfidfVectorizer(tokenizer=tokenizerFunc, analyzer='word', min_df=3, token_pattern=r'(?u)\b[A-Za-z]+\b', stop_words='english')
tfidf_matrix = TFIDF.fit_transform(df2['job_title'])
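An equivalent pattern is a small callable class that wraps the lemmatizer. The sketch below uses a hypothetical stand-in fake_lemmatize function so it runs without nltk; with nltk installed you would pass WordNetLemmatizer().lemmatize instead:

```python
class LemmaTokenizer:
    """Callable that splits a document into tokens and lemmatizes each one."""
    def __init__(self, lemmatize):
        self.lemmatize = lemmatize  # any word -> word function

    def __call__(self, doc):
        return [self.lemmatize(token) for token in doc.split()]

# Hypothetical stand-in lemmatizer (just strips a plural 's'); with nltk
# you would use WordNetLemmatizer().lemmatize here instead.
def fake_lemmatize(word):
    return word[:-1] if word.endswith('s') else word

tokenize = LemmaTokenizer(fake_lemmatize)
print(tokenize("senior data scientists"))  # ['senior', 'data', 'scientist']
```

An instance of this class can be passed directly as the tokenizer, e.g. TfidfVectorizer(tokenizer=LemmaTokenizer(WordNetLemmatizer().lemmatize)), and behaves like tokenizerFunc above.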
Upvotes: 1
Reputation: 2868
tokenizer should be lemmatizer.lemmatize, not lemmatizer:
lemmatizer = WordNetLemmatizer()
TFIDF = TfidfVectorizer(tokenizer=lemmatizer.lemmatize, analyzer='word', min_df=3, token_pattern=r'(?u)\b[A-Za-z]+\b', stop_words='english')
Output:
TFIDF.fit_transform(['how are you', 'facing issue','hope this well help you' ])
#o/p
<3x3 sparse matrix of type '<class 'numpy.float64'>'
with 9 stored elements in Compressed Sparse Row format>
Upvotes: 2