Reputation: 11
I am trying to extract the summary section of a Wikipedia article by retrieving a JSON response from this query: https://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts&exintro&explaintext&redirects=1&titles=Stack%20Overflow. I would then like to parse the response and display the summary in the HTML of my website. I found an example of JSON parsing that seems to work when tested on a JSON string, and I have also tried recycling a JSON retrieval function, but being a complete novice I am clearly missing something: the code below does not return any data (opening it in an HTML editor gives a blank page). Could anyone suggest a correction that would let me retrieve the Wikipedia data and parse it?
CODE :
<html>
<body>

<p id="demo"></p>

<script>
var getJSON = function(url, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.responseType = 'json';
  xhr.onload = function() {
    var status = xhr.status;
    if (status === 200) {
      callback(null, xhr.response);
    } else {
      callback(status, xhr.response);
    }
  };
  xhr.send();
};

resp = getJSON('https://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts&exintro&explaintext&redirects=1&titles=Stack%20Overflow')
var obj = JSON.parse(resp);
obj.extract = eval("(" + obj.extract + ")");
document.getElementById("demo").innerHTML = obj.extract;
</script>

</body>
</html>
**Expected JSON response** (the desired output is the text after "extract"):
{"batchcomplete":"","query":{"pages":{"21721040":{"pageid":21721040,"ns":0,"title":"Stack Overflow","extract":"Stack Overflow is a question and answer site for professional and enthusiast programmers. It is a privately held website, the flagship site of the Stack Exchange Network, created in 2008 by Jeff Atwood and Joel Spolsky. It features questions and answers on a wide range of topics in computer programming. It was created to be a more open alternative to earlier question and answer sites such as Experts-Exchange. The name for the website was chosen by voting in April 2008 by readers of Coding Horror, Atwood's popular programming blog.The website serves as a platform for users to ask and answer questions, and, through membership and active participation, to vote questions and answers up or down and edit questions and answers in a fashion similar to a wiki or Reddit. Users of Stack Overflow can earn reputation points and \"badges\"; for example, a person is awarded 10 reputation points for receiving an \"up\" vote on an answer given to a question and 10 points for the \"up\" vote of a question, and can receive badges for their valued contributions, which represents a gamification of the traditional Q&A site. Users unlock new privileges with an increase in reputation like the ability to vote, comment, and even edit other people's posts. All user-generated content is licensed under a Creative Commons Attribute-ShareAlike license.Closing questions is a main differentiation from Yahoo! Answers and a way to prevent low quality questions. The mechanism was overhauled in 2013; questions edited after being put \"on hold\" now appear in a review queue. 
Jeff Atwood stated in 2010 that duplicate questions are not seen as a problem but rather they constitute an advantage if such additional questions drive extra traffic to the site by multiplying relevant keyword hits in search engines.As of January 2019 Stack Overflow has over 10 million registered users, and it exceeded 16 million questions in mid 2018. Based on the type of tags assigned to questions, the top eight most discussed topics on the site are: JavaScript, Java, C#, PHP, Android, Python, jQuery and HTML.Stack Overflow also has a Jobs section to assist developers in finding their next opportunity. For employers, Stack Overflow provides tools to brand their business, advertise their openings on the site, and source candidates from Stack Overflow's database of developers who are open to being contacted.\n\n"}}}}
Upvotes: 0
Views: 80
Reputation: 7464
In your code, `getJSON` takes two arguments: `url` and `callback`. You are providing the first, but not the second. What you need to do is put the code after the `getJSON` invocation into a callback function, like this:
var url = 'https://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts&exintro&explaintext&redirects=1&titles=Stack%20Overflow';
getJSON(url, afterGet);

function afterGet(error, resp) {
  if (error) {
    console.error('Request failed with status ' + error);
    return;
  }
  // Because responseType is 'json', resp is already a parsed object;
  // there is no need for JSON.parse or eval.
  var pages = resp.query.pages;
  var pageId = Object.keys(pages)[0]; // the page ID key is not known in advance
  document.getElementById("demo").innerHTML = pages[pageId].extract;
}
The reason for this is that the `XMLHttpRequest` happens asynchronously. You call it, but it won't be done by the time the next line of code runs. So you tell it to run the code later, by giving it a callback function.
Promises are also a good way to delay code execution until after an asynchronous action happens. If you want to learn more about Promises, you might experiment with using `fetch` instead of `XMLHttpRequest`.
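To illustrate, here is a minimal sketch using `fetch`. The helper name `extractSummary` is made up for this example; the `origin=*` parameter is how the MediaWiki API permits anonymous cross-origin browser requests:

```javascript
// Pull the "extract" text out of the API's nested response shape:
// the page ID key ("21721040" in the example above) is not known in
// advance, so take the first page under query.pages.
function extractSummary(data) {
  var pages = data.query.pages;
  var firstId = Object.keys(pages)[0];
  return pages[firstId].extract;
}

var url = 'https://en.wikipedia.org/w/api.php?format=json&action=query' +
  '&prop=extracts&exintro&explaintext&redirects=1' +
  '&titles=Stack%20Overflow&origin=*';

// Guard so the network/DOM part only runs in a browser.
if (typeof document !== 'undefined') {
  fetch(url)
    .then(function (response) {
      if (!response.ok) throw new Error('HTTP ' + response.status);
      return response.json(); // parses the body, like responseType = 'json'
    })
    .then(function (data) {
      document.getElementById('demo').innerHTML = extractSummary(data);
    })
    .catch(function (err) {
      console.error('Request failed:', err);
    });
}
```

`fetch` returns a Promise, so each `.then` runs only after the previous asynchronous step completes: the same callback idea as above, expressed as a chain.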
Upvotes: 1