Reputation: 21
I'm sorry if this has been asked before, or it's a really basic C question - I am more used to creating websites, in which case you can just use JS's fetch function to make network requests asynchronously:
fetch().then(resp => {
/* Deal with response without blocking code execution or UI rendering */
});
However, I was looking into creating a C app with CLAY, and since CLAY uses a 3rd-party renderer like SDL or Raylib, I decided to get somewhat familiar with that first. I chose SDL2 as it seemed the simpler of the two to learn. First, I create a window, checking whether SDL initialization or window creation fails:
int sdlCode = SDL_Init(SDL_INIT_VIDEO);
if (sdlCode < 0) {
    printf("Could not initialize SDL: %s :(", SDL_GetError());
    return sdlCode;
}
SDL_Window *win = SDL_CreateWindow(
    "SDL Tutorial",
    SDL_WINDOWPOS_UNDEFINED,
    SDL_WINDOWPOS_UNDEFINED,
    800,
    600,
    SDL_WINDOW_SHOWN | SDL_WINDOW_RESIZABLE
);
if (win == NULL) {
    printf("Could not create SDL window: %s :(", SDL_GetError());
    return -1;
}
Then I create a simple application loop to handle only resize or quit events, redrawing a surface with a white background for the window on resize:
SDL_Surface *screenSurface;
SDL_Event e;
SDL_Rect r;
int w, h;
bool repaint = true;
do {
    SDL_PollEvent(&e);
    if (repaint) {
        screenSurface = SDL_GetWindowSurface(win);
        SDL_GetWindowSize(win, &w, &h);
        r = (SDL_Rect) { 0, 0, w, h };
        /* map the color through the surface's pixel format rather than
           assuming a raw 0xFFFFFF matches it */
        SDL_FillRect(screenSurface, &r, SDL_MapRGB(screenSurface->format, 0xFF, 0xFF, 0xFF));
        SDL_UpdateWindowSurface(win);
    }
    repaint = e.type == SDL_WINDOWEVENT && e.window.event == SDL_WINDOWEVENT_RESIZED;
} while (e.type != SDL_QUIT);
Finally I destroy the window & clean up on quit:
SDL_DestroyWindow(win);
SDL_Quit();
return 0;
I could obviously add more content to the app using SDL or a layout engine like CLAY. But a problem arises when I want to get data (e.g. in JSON format) from an online API when a user action / event is triggered. As a simple example, let's fetch https://httpbin.org/delay/2 when the app is resized. I know this is easy to do with libcurl's curl_easy_perform - however, unlike JavaScript's fetch function, libcurl blocks the UI loop:
curl_easy_perform - perform a blocking network transfer
// [...]
CURL *curlCtx = curl_easy_init();
curl_easy_setopt(curlCtx, CURLOPT_URL, "https://httpbin.org/delay/2");
curl_easy_setopt(curlCtx, CURLOPT_WRITEFUNCTION, fwrite);
curl_easy_setopt(curlCtx, CURLOPT_WRITEDATA, stdout);
CURLcode resp;
do {
    // [...]
    if (repaint) {
        resp = curl_easy_perform(curlCtx);
        if (resp != CURLE_OK) printf("Curl request failed: %s :(", curl_easy_strerror(resp));
        // [...]
    }
    // [...]
} while (e.type != SDL_QUIT);
curl_easy_cleanup(curlCtx);
// [...]
return 0;
As expected, when first rendering & when resizing the window there's a delay before the background is painted white, during which the window appears "glitched". I expect the same to occur in a complex app when e.g. a TODO list is displayed when the user clicks a button.
When searching online, all the results are about how to use curl_multi_perform to perform many requests concurrently, or how to create multiplayer games in SDL. The only solution I could think of is using SDL multithreading. But I wonder whether using multiple threads (which have heavy overhead) is really necessary to solve such a simple problem - as per the documentation:
Here's the important part: a poorly made multithreaded program can perform worse than a single threaded program. Much worse. The fact is that multithreading inherently adds more overhead because threads then have to be managed. If you do not know the costs of using different multithreading tools, you can end up with code that is much slower than its single threaded equivalent.
...which makes me question - how does JS's fetch even work? Does it spawn a separate thread under the hood, or is there a way (an OS syscall) to make requests and only call a function when a response has been returned? Thanks in advance!
Upvotes: 2
Views: 54
Reputation: 15693
To answer the title question: you can do this by using curl_multi_perform instead of curl_easy_perform. As the name implies, this also allows you to make multiple requests "at the same time", but the important part for our use case here is that it's nonblocking.
Here is an example:
#include <curl/curl.h>
#include <stdio.h>
#include <unistd.h>
#include <assert.h>

int main(void) {
    CURL *request;
    CURLM *multi;
    int running_handles = 1;

    // unbuffer stdout so the dots animation below shows up immediately
    setvbuf(stdout, NULL, _IONBF, 0);

    request = curl_easy_init();
    assert(request != NULL);
    multi = curl_multi_init();
    assert(multi != NULL);

    curl_easy_setopt(request, CURLOPT_URL, "https://httpbin.org/delay/2");
    curl_multi_add_handle(multi, request);

    const char *dots[] = {"   ", ".  ", ".. ", "..."};
    int dots_ix = 0;
    while (running_handles > 0) {
        CURLMcode perform_c = curl_multi_perform(multi, &running_handles);
        assert(perform_c == CURLM_OK);
        if (running_handles > 0) {
            /* print something here to demonstrate we're not blocked */
            printf("\r%s", dots[dots_ix]);
            dots_ix = (dots_ix + 1) % (int)(sizeof(dots) / sizeof(dots[0]));
            usleep(300000);
        }
    }
    printf("DONE\n");

    curl_multi_remove_handle(multi, request);
    curl_easy_cleanup(request);
    curl_multi_cleanup(multi);
    return 0;
}
The question about JS fetch() is a wholly separate topic which you should ask about separately, but the short (and not very useful) answer is that fetch() is an API; there's not just one single "correct" implementation of it, and in fact you'll find that JS implementations on different platforms don't all work exactly the same way.
On Linux there are multiple options; you can look at the epoll documentation to see one way of doing this. Windows also provides asynchronous APIs. In the case of Linux, you can also check the kernel source code if you're interested in how this works inside the kernel. There are also mid-level libraries like libuv which abstract over the platform-specific APIs (which ties back to the fetch() thing as well, because at least originally NodeJS used libuv in its implementation - I don't know if it still does now, though).
Just some food for thought: like many people, you think of the blocking versions of IO as the 'natural' way and async as the 'special' way, because that's usually the order in which they're taught. But if you think about it from an implementation perspective, it's actually the other way round. IO eventually comes from, or goes to, hardware that is usually decoupled from your CPU; your CPU is not going to stop running just because it told your hard drive to write something. In reality, the implementations use async logic underneath, and the sync APIs are wrappers to make things easier for programmers.
Upvotes: 1