Reputation: 305
Is it possible to do this with a lambda/stream:
for (final WarehouseAddress address : warehouse.getAddresses()) {
    if (!Validator.isEmpty(address.getPositions())) {
        final Set<WarehouseAddressPosition> positions =
                new HashSet<WarehouseAddressPosition>(address.getPositions());
        if (address.getPositions().size() > positions.size()) {
            throw new FieldDuplicatedException("position error");
        }
    }
}
I tried this without success:
final Set<WarehouseAddressPosition> positions = warehouse.getAddresses().stream()
        .filter(a -> !Validator.isEmpty(a.getPositions()))
        .collect(Collectors.toSet());
It fails because it tries to collect a Set of WarehouseAddress instead of a Set of WarehouseAddressPosition.
Error:(130, 156) java: incompatible types: inference variable T has incompatible bounds
    equality constraints: br.com.erp.ejb.entity.sales.WarehouseAddressPosition
    lower bounds: br.com.erp.ejb.entity.sales.WarehouseAddress
In the Warehouse class I have:
private List<WarehouseAddress> addresses;
In the WarehouseAddress class I have:
private List<WarehouseAddressPosition> positions;
Explanation: this code copies each address's list of positions into a Set and compares the sizes to check for duplicates. Because a Set does not accept duplicated values, it ends up smaller than the original list whenever the list contains a duplicate.
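For instance, the same size trick on a plain list of strings (just an illustration, not my real entity classes) looks like this:
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

final List<String> values = Arrays.asList("A1", "B2", "A1");
// a HashSet silently drops duplicates, so a smaller size means the list had them
final Set<String> unique = new HashSet<>(values);
System.out.println(values.size() > unique.size()); // prints true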
Upvotes: 1
Views: 1989
Reputation: 394016
If I understand correctly, you want to collect WarehouseAddressPosition, not WarehouseAddress, so you need:
final Set<WarehouseAddressPosition> positions =
        warehouse.getAddresses()
                 .stream()
                 .filter(a -> !Validator.isEmpty(a.getPositions()))
                 .flatMap(a -> a.getPositions().stream())
                 .collect(Collectors.toSet());
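The reason flatMap is needed is that filter alone leaves you with a Stream<WarehouseAddress>, while flatMap replaces each address with the stream of its positions. A tiny illustration with plain lists (the names here are made up for the example):
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

final List<List<String>> nested = Arrays.asList(
        Arrays.asList("A1", "B2"),
        Arrays.asList("B2", "C3"));
// flatMap turns the Stream<List<String>> into a Stream<String> before collecting
final Set<String> flat = nested.stream()
        .flatMap(List::stream)
        .collect(Collectors.toSet());
// flat now contains A1, B2, C3 (B2 only once)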
If you wish to fail the process when you encounter a duplicate WarehouseAddressPosition in the same WarehouseAddress, I would suggest changing private List<WarehouseAddressPosition> positions to be a Set. This way you can find such duplicates before collecting the positions of all the addresses, roughly as in the sketch below.
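A rough sketch of that change, assuming WarehouseAddressPosition implements equals/hashCode consistently and reusing FieldDuplicatedException from your code (the addPosition method is only a guess at how the entity exposes it):
import java.util.LinkedHashSet;
import java.util.Set;

// inside WarehouseAddress
private final Set<WarehouseAddressPosition> positions = new LinkedHashSet<>();

public void addPosition(final WarehouseAddressPosition position) {
    // Set.add returns false when an equal element is already present
    if (!positions.add(position)) {
        throw new FieldDuplicatedException("position error");
    }
}

public Set<WarehouseAddressPosition> getPositions() {
    return positions;
}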
If you have a good reason not to make this change, you can perform the same duplicate check (currently done in your original code) in the filter step (I hope I got the syntax right, since I have no way to check it right now):
final Set<WarehouseAddressPosition> positions =
        warehouse.getAddresses()
                 .stream()
                 .filter(a -> {
                     if (!Validator.isEmpty(a.getPositions())) {
                         // a distinct name, since "positions" would clash with the outer variable
                         final Set<WarehouseAddressPosition> unique =
                                 new HashSet<WarehouseAddressPosition>(a.getPositions());
                         if (a.getPositions().size() > unique.size()) {
                             throw new FieldDuplicatedException("position error");
                         }
                         return true;
                     }
                     return false;
                 })
                 .flatMap(a -> a.getPositions().stream())
                 .collect(Collectors.toSet());
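If the inline block feels too heavy, the same check can be extracted into a small helper and referenced from filter (hasNoDuplicatePositions is just a name I made up for this example):
private boolean hasNoDuplicatePositions(final WarehouseAddress address) {
    if (Validator.isEmpty(address.getPositions())) {
        return false; // nothing to check, skip this address
    }
    final Set<WarehouseAddressPosition> unique =
            new HashSet<WarehouseAddressPosition>(address.getPositions());
    if (address.getPositions().size() > unique.size()) {
        throw new FieldDuplicatedException("position error");
    }
    return true;
}

final Set<WarehouseAddressPosition> positions =
        warehouse.getAddresses()
                 .stream()
                 .filter(a -> hasNoDuplicatePositions(a))
                 .flatMap(a -> a.getPositions().stream())
                 .collect(Collectors.toSet());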
Upvotes: 7