Reputation: 463
So I was trying to optimize my code as much as possible. The following code used to run in a little over 5 seconds; I managed to reduce that to about 1.4 seconds, but it's still not enough. What can I do to optimize this code even more? (Maybe I should mention that the times I'm talking about occur when the aux Map ends up with 170080 keys.)
public List<String> getProdutosMaisCompradosQuantidade(int filial, int X) {
    Map<String, ProdutoFilial> aux;
    if (filial == 0) {
        aux = new HashMap<>(ValoresFixos.CATALOGO_PRODUTOS_TAMANHO_INICIAL);
        filiais.stream()
               .forEach(f -> {
                   Map<String, ProdutoFilial> aux2 = f.getMapProdutosDadosFilialSemEncapsulamento();
                   aux2.forEach((k, t) -> {
                       if (t.getQuantidade() > 0) {
                           if (aux.containsKey(k)) aux.get(k).atualizarValores(t);
                           else aux.put(k, t);
                       }
                   });
               });
    } else {
        aux = filiais.get(filial - 1).getMapProdutosDadosFilialSemEncapsulamento();
    }
    List<String> list = aux.entrySet()
                           .stream()
                           .sorted(new ComparadorProdutoQuantidade())
                           .map(e -> e.getKey() + "\n | o Quantidade: " + e.getValue().getQuantidade()
                                   + "; Comprado por " + e.getValue().getNumeroCompradores() + " Clientes Distintos")
                           .collect(Collectors.toList());
    if (X > list.size()) X = list.size();
    list.subList(X, list.size()).clear();
    return list;
}
All the methods I use here are close to O(1) complexity, and the comparator isn't too taxing either, so those shouldn't be the problem. Is there something I may not know that could help me optimize these stream operations? Maybe the entrySet I use can be avoided? It's probably the most expensive operation here.
EDIT1: Maybe I should explain the idea behind this method. Its main purpose is to sort the map aux and return a list of the keys in that order (the keys are also modified, but that's not the main intention).
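To illustrate, the sort-and-take-top-X intent can be sketched in a simplified form, using Integer quantities in place of ProdutoFilial (an assumption, since that class isn't shown) and limit(X) in place of the collect-then-clear-subList step:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TopXSketch {
    // Simplified sketch: sort the map entries by quantity (descending)
    // and keep only the top X keys. Integer stands in for ProdutoFilial.
    static List<String> topX(Map<String, Integer> aux, int x) {
        return aux.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue(Comparator.reverseOrder()))
                .limit(x) // truncates the stream instead of clearing a subList afterwards
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, Integer> aux = Map.of("apple", 5, "pear", 2, "plum", 9);
        System.out.println(topX(aux, 2)); // prints [plum, apple]
    }
}
```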
Upvotes: 0
Views: 2884
Reputation: 8200
You could try to replace the forEach statements with map and collect, as described in Java 8 - Best way to transform a list: map or foreach?
Then you could check whether a parallel stream improves your performance.
It is probably possible to replace the statement that creates the aux map with the Stream API (using Collectors.toMap and/or Collectors.groupingBy). This is considered cleaner than forEach, which relies on stateful side effects.
There are already quite a few questions on how to do that: https://stackoverflow.com/search?q=groupingBy+[java-stream] or https://stackoverflow.com/search?q=toMap+[java-stream]
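As a rough sketch of that toMap approach (simplified: Integer quantities stand in for ProdutoFilial, and Integer::sum plays the role of atualizarValores; both are assumptions, since those types aren't shown in the question):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class MergeMapsSketch {
    // Sketch: flatten the per-branch maps into one stream of entries,
    // keep only products with a positive quantity, and let toMap's merge
    // function combine duplicate keys.
    static Map<String, Integer> mergeBranches(List<Map<String, Integer>> branches) {
        return branches.stream()
                .flatMap(m -> m.entrySet().stream())
                .filter(e -> e.getValue() > 0)
                .collect(Collectors.toMap(
                        Map.Entry::getKey,
                        Map.Entry::getValue,
                        Integer::sum)); // merge function, in place of atualizarValores
    }

    public static void main(String[] args) {
        Map<String, Integer> b1 = Map.of("apple", 3, "pear", 0);
        Map<String, Integer> b2 = Map.of("apple", 2, "plum", 5);
        System.out.println(mergeBranches(List.of(b1, b2)));
    }
}
```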
If you need a quicker solution (with fewer changes), you could replace the Map with a ConcurrentHashMap and use a parallel stream. You can use its merge function to make your computation parallelizable.
Upvotes: 1