sixovov947

Reputation: 215

How to minimise Django search queries?

I am planning to implement a live autocomplete search function in Django, but I believe I am overdoing it, so I need a suggestion.

Basically, this search bar is located on a page where, at first, all the available data is displayed as a list. Then, when the user types something, I filter it using jQuery's autocomplete.

So I am making two queries: one to get all the data, and another on each keystroke.

As this takes two queries, I believe it's inefficient.

Here is my search function:

from django.http import JsonResponse

def search_query(request):
    names = []
    if 'term' in request.GET:
        qs = University.objects.filter(name__icontains=request.GET.get('term'))
        for data in qs:
            names.append(data.name)
    return JsonResponse(names, safe=False)

The above is called each time the user types in the search bar. This is my jQuery autocomplete code:

<script>
  $( function() {
    $( "#tags" ).autocomplete({
      source: "{% url 'students:search-query' %}" // this hits the above def
    });
  } );
  </script>

Apart from this, I also have this view to show the full list as initial data:

from django.shortcuts import render

def index(request):
    universities = University.objects.filter(published=True)
    context = {'universities': universities}
    return render(request, 'students/search/index.html', context)

Is this the right way, or is there a more efficient way? Can both queries be combined? Please suggest an efficient solution (approx. 50,000 records to be searched as of now; this may increase).

Upvotes: 0

Views: 59

Answers (1)

Resley Rodrigues

Reputation: 2288

First off, both queries return different data: the index returns only published universities, while the search returns all of them.

There are two solutions to your problem. Either one should work; go with whichever you're more comfortable with (i.e. if you only write Python, improve the query; if you prefer JS, do it on the frontend).

Performing the search on the Frontend

Now, since you're returning all the data to the frontend anyway, you can simply hand it all to jQuery's autocomplete as the source and let the filtering happen on the frontend itself.
This can be as simple as:

<script>
  var universities = [] // populated either via an API or HTML rendering
  $( function() {
    $( "#tags" ).autocomplete({
      source: universities.map(uni => uni.name),
    });
  });
</script>

OR

<script>
  var universities = [] // populated either via an API or HTML rendering
  function filterUniversities(request, response) {
    // autocomplete passes a request object; the typed text is request.term
    var term = request.term.toLowerCase();
    var filteredNames = universities
      .filter(uni => uni.name.toLowerCase().includes(term))
      .map(uni => uni.name);
    response(filteredNames);
  }
  $( function() {
    $( "#tags" ).autocomplete({
      source: filterUniversities,
    });
  });
</script>

In either case, you first need to set the list of universities. Looking at your code, I see that you're rendering the data, so an API is out of the question. You can do something like:

<script>
    var universities = [{% for uni in universities %}{name: "{{ uni.name|escapejs }}"}{% if not forloop.last %}, {% endif %}{% endfor %}]
</script>

Note that {{ uni }} on its own would render the model's string representation, which isn't valid JavaScript; quoting the name and passing it through the escapejs filter keeps the output safe.

There is also something called json_script that I've never tried but might work better. Note that the variable must be JSON-serializable, so pass a plain list (e.g. of names or dicts) from the view rather than a QuerySet of model instances.

{{ universities|json_script:"university-data" }}

And then your script simply becomes

<script>
  var universities = JSON.parse(document.getElementById('university-data').textContent);
  $( function() {
    $( "#tags" ).autocomplete({
      source: universities.map(uni => uni.name),
    });
  });
</script>

The first one will slow down your initial render time, since the frontend has to build that list right at the start.
The second will make every search slightly slower instead.
Of course, both of these delays are in milliseconds and should be much less than making a call to the server.

Improving backend query

If you prefer to keep the search async and call your search_query method each time, you can increase autocomplete's delay option. By default it waits 300 ms after the user stops typing before performing the search.

In terms of the actual query:

def search_query(request):
    term = request.GET.get('term')
    names = []
    if term:
        names = list(
            University.objects
            .filter(name__icontains=term)
            .values_list("name", flat=True)
        )
    return JsonResponse(names, safe=False)

The main changes here are:

  1. We're fetching only the university names from the db. Each university might contain lots of data, and fetching it all can be slow; fetching only what's needed is much quicker.
  2. We're casting the queryset to a list rather than building a new list with append. Appending in a loop is the slowest way to create a list; a list comprehension is quicker, but in this case we don't even need one, since simply casting to a list is the quickest.
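The append-loop vs comprehension vs cast comparison can be illustrated in plain Python (strings stand in for model rows; the names here are made up for the example):

```python
# Plain strings stand in for University rows; only an illustration.
rows = ["Oxford", "Cambridge", "Stanford"]

# 1. append in a loop: the slowest way to build the list
names_loop = []
for name in rows:
    names_loop.append(name)

# 2. list comprehension: quicker
names_comp = [name for name in rows]

# 3. direct cast: quickest, and all values_list(..., flat=True) needs
names_cast = list(rows)

# All three produce the same result.
assert names_loop == names_comp == names_cast == rows
```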

We've also pulled the term into a variable and checked it. Not a huge speed gain, but if the term is blank (i.e. request.GET['term'] == '') we don't want to run the search at all. 50k rows shouldn't be a lot for any database. If you still find the db too slow, you will need to analyse your queries and see how they can be optimised. Some sort of index might be needed, but that is beyond the scope of this answer, and should most likely not be necessary.
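For completeness, a minimal sketch of what a plain column index looks like at the database level, using sqlite3 outside Django (table and index names here are hypothetical):

```python
import sqlite3

# In-memory database standing in for the real one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE university (id INTEGER PRIMARY KEY, name TEXT)")

# An ordinary b-tree index on the searched column.
conn.execute("CREATE INDEX idx_university_name ON university (name)")

# Caveat: icontains translates to LIKE '%term%', and a leading wildcard
# prevents a plain b-tree index from being used, so genuine substring
# search would need something like PostgreSQL's pg_trgm trigram indexes.
```

In Django you would express the same thing with db_index=True on the model field, but as noted above, at this scale it should most likely not be necessary.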

Upvotes: 1
