Reputation: 3
Thing is, I'm using the information in this link combined with some tips about HTTP POSTing in order to get some data from an Extractor in import.io. It would be MUCH better if I could get the data with a Bulk Extract instead of writing a loop that sends thousands of slow requests.
In fact, to get any data at all from "GET /store/connector/{id}/_query", I have to pass webpage/url:link in the "input" section.
Is there any API call that would let me do a bulk extraction? I have the same question about being able to fire an HTTP request in crawlers with advanced settings, but so far that's not as pressing.
Thanks!
Upvotes: 0
Views: 109
Reputation: 610
There does not seem to be a method for doing bulk extract via the API.
Bulk extract works by doing exactly what you mention above (looping and sending thousands of slow requests), just in the UI. The results are not stored anywhere but in your browser until you download them.
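So replicating a bulk extract yourself really is just that loop. A minimal sketch of what the UI does, using the `_query` endpoint you mention (the base URL and the `_apikey` parameter name are assumptions on my part, so check them against your account's API docs):

```python
# Sketch: emulate a bulk extract by querying the connector once per URL.
# ASSUMPTIONS: the api.import.io base URL and the `_apikey` query
# parameter are illustrative; verify them against your own API docs.
from urllib.parse import urlencode

API_BASE = "https://api.import.io/store/connector"  # assumed base URL


def build_query_url(connector_id, page_url, api_key):
    """Build the GET /store/connector/{id}/_query URL for one page."""
    params = urlencode({
        "input": "webpage/url:" + page_url,  # as in the question above
        "_apikey": api_key,                  # assumed auth parameter
    })
    return "{}/{}/_query?{}".format(API_BASE, connector_id, params)


# A "bulk extract" is then just one request per page, collected client-side:
urls = ["http://example.com/a", "http://example.com/b"]
for u in urls:
    query_url = build_query_url("my-connector-id", u, "MY_API_KEY")
    # resp = requests.get(query_url)   # send it, then gather resp.json()
    # results.append(resp.json())
```

Like the UI, nothing is persisted server-side; you accumulate the per-page responses yourself and write them out at the end.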
For your second question: crawler data CAN be sent via HTTP if you are using the command-line crawler. Look at this answer.
Upvotes: 0