E. J. Winkleberry

Reputation: 149

Implementing hyper::Service on a reference to a struct

I'm trying to make a question-and-answer server for a huge data structure. The user would send JSON questions to the server, and the server would use the huge data structure to answer them.

I'm trying to do this by implementing the hyper::server::Service trait for my Oracle struct.

I've got something like this:

use self::hyper::server::{Http, Service, Request, Response};
// ...other imports

struct Oracle { /* Tons of stuff */}

impl<'a> Service for &'a Oracle {
    type Request = Request;
    type Response = Response;
    type Error = hyper::Error;
    type Future = Box<Future<Item = Self::Response, Error = Self::Error>>;

    fn call(&self, req: Request) -> Self::Future {
         match (req.method(), req.path()) {
            // could be lots of question types
            (&hyper::Method::Post, "/query") => {
                Box::new(req.body().concat2().map(|b| {
                    let query: Query = deserialize_req(&b.as_ref());
                    let ans = get_answer(&self, &query);
                    Response::new()
                        .with_header(ContentLength(ans.len() as u64))
                        .with_body(ans)
                }))
            },
            _ => {
                let response = Response::new()
                    .with_status(hyper::StatusCode::NotFound);
                Box::new(futures::future::ok(response))
            },
        }
    }
}

This causes lifetime problems (cannot infer an appropriate lifetime due to conflicting requirements) when I try to put &self in a future.

My inclination is that this is totally the wrong way to approach this problem, but I'm having a hard time figuring out the best way to do this.

Upvotes: 0

Views: 791

Answers (1)

ArtemGr

Reputation: 12567

Note that these futures are going to be compute-intensive; it would make sense to run them on a CPU pool and avoid running them on the asynchronous, single-threaded Tokio Core reactor.
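
As a rough illustration, here's a minimal sketch of running a blocking computation on a CPU pool, assuming the futures-cpupool crate (the usual companion to futures 0.1 and hyper 0.11); the summation is just a stand-in for the real Oracle work:

extern crate futures;
extern crate futures_cpupool;

use futures::Future;
use futures_cpupool::CpuPool;

fn main() {
    // A pool of worker threads for CPU-bound work, kept separate
    // from the single-threaded Tokio reactor.
    let pool = CpuPool::new_num_cpus();

    // `spawn_fn` runs the closure on a pool thread and returns a
    // future that resolves to the closure's result.
    let answer = pool.spawn_fn(|| -> Result<u64, ()> {
        // Stand-in for the expensive Oracle computation.
        Ok((0..1_000_000u64).sum())
    });

    println!("sum = {}", answer.wait().unwrap());
}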

The &self in call is a reference to memory managed by the call site. The call site might free that memory right after the call, or at some other point we don't control, so saving the reference in the closure ("closing over the reference") for later use is not sound.

To manage the memory in a way that lends itself better to sharing, you'd often use a reference-counting pointer. The Oracle is then owned by the reference-counting pointer rather than by the call site, allowing you to freely share it with closures and threads.

If you want to process these futures in parallel, you'd need a thread-safe reference-counting pointer, such as Arc.
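
As a minimal, self-contained sketch of that ownership model (the empty Oracle is a placeholder):

use std::sync::Arc;
use std::thread;

struct Oracle { /* Tons of stuff */ }

fn main() {
    // The Oracle is owned by the Arc, not by any particular caller.
    let oracle = Arc::new(Oracle { /* ... */ });

    // Cloning the Arc only bumps the reference count; the Oracle itself
    // is not copied and is freed when the last clone is dropped.
    let for_thread = oracle.clone();
    let handle = thread::spawn(move || {
        // `&*for_thread` yields a plain `&Oracle` wherever one is needed.
        let _oracle_ref: &Oracle = &*for_thread;
    });
    handle.join().unwrap();
}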

To use the Arc, you could turn the call into a free function:

fn call(oracle: Arc<Oracle>, req: Request) -> OracleFuture

Or use a trait to implement the call on the pointer:

struct Oracle { /* Tons of stuff */ }

type OraclePt = Arc<Oracle>;

// The boxed future named in the free-function signature above.
type OracleFuture = Box<Future<Item = Response, Error = hyper::Error>>;

trait OracleIf {
    fn call(&self, req: Request) -> OracleFuture;
}

impl OracleIf for OraclePt {
    fn call(&self, req: Request) -> OracleFuture {
        // ...
        let oracle: OraclePt = self.clone();  // Bump the reference count.
        Box::new(req.body().concat2().map(move |b| {  // Close over `oracle`.
            let query: Query = deserialize_req(&b.as_ref());
            let ans = get_answer(&*oracle, &query);
            Response::new()
                .with_header(ContentLength(ans.len() as u64))
                .with_body(ans)
        }))
    }
}

We close over a clone of the reference-counting pointer here, so the Oracle is kept alive for as long as the future needs it.
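
For completeness, here is one way this might be wired back into hyper 0.11's Service. This is only a sketch, with a hypothetical OracleService newtype that holds the pointer and delegates to the trait above:

struct OracleService(OraclePt);

impl Service for OracleService {
    type Request = Request;
    type Response = Response;
    type Error = hyper::Error;
    type Future = OracleFuture;

    fn call(&self, req: Request) -> Self::Future {
        // Delegate to OracleIf::call; the clone made there keeps the
        // Oracle alive for as long as the returned future runs.
        self.0.call(req)
    }
}

// At startup, every connection gets its own cheap clone of the pointer:
//
//     let oracle = Arc::new(Oracle { /* ... */ });
//     let addr = "127.0.0.1:3000".parse().unwrap();
//     let server = Http::new()
//         .bind(&addr, move || Ok(OracleService(oracle.clone())))
//         .unwrap();
//     server.run().unwrap();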

If you don't like the idea of using reference-counting pointers, another option is a "scoped thread pool": a thread pool that guarantees the child threads terminate before the parent thread does, making it possible to safely share the Oracle reference with the child threads.
It might be easier to do the latter without wrapping the computation in a Future.
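
A minimal sketch of that approach, assuming the scoped_threadpool crate (crossbeam's scoped threads would work similarly); the jobs borrow the Oracle directly from the stack:

extern crate scoped_threadpool;

use scoped_threadpool::Pool;

struct Oracle { /* Tons of stuff */ }

fn main() {
    let oracle = Oracle { /* ... */ };
    let mut pool = Pool::new(4);

    // `scoped` guarantees that every job spawned inside finishes before
    // it returns, so borrowing `oracle` from this stack frame is safe.
    pool.scoped(|scope| {
        for _ in 0..4 {
            let oracle_ref = &oracle;
            scope.execute(move || {
                // Answer a batch of queries here using `oracle_ref`.
                let _ = oracle_ref;
            });
        }
    });
}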

Upvotes: 2
