Reputation: 2959
I'm working with SQLAlchemy (1.4/2.0 transitional style) to access a PostgreSQL database. I am using group_by to count the number of users with each name, ordered by that count. What I need to do is then count the same thing, but with additional conditions. For example, if I have this User object:
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    user_id = Column(Integer, primary_key=True)
    name = Column(String)
    city = Column(String)
I can get the top 100 most popular User names, and the count of users with each name, using:
from sqlalchemy import select, func, desc

stmt = select(User.name, func.count(User.user_id).label("total")) \
    .group_by(User.name) \
    .order_by(desc('total')) \
    .limit(100)
What I need to do is, for those 100 most popular names, find how many users with each name have city == "New York", "Chicago", and "Los Angeles". I'd like to get the data back looking something like:
name: "Charles Smith", total:526, NY:28, Chi:17, LA:23
Any idea how to do this?
Upvotes: 0
Views: 190
Reputation: 14861
In straight SQL you can use the FILTER clause on count to get the individual city counts, along with a standard count for the total. See example here:
select name, total, ny, chi, la
  from ( select name
              , count(*) total
              , COUNT(*) FILTER (WHERE city='Chicago') chi
              , COUNT(*) FILTER (WHERE city='Los Angeles') la
              , COUNT(*) FILTER (WHERE city='New York') ny
           from "user"
          group by name
       ) sq;
I leave it to you to convert to SQLAlchemy; I can read it well enough to understand it, but am not comfortable attempting to write it.
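For reference, a rough SQLAlchemy translation might look like the sketch below (untested; it assumes the User model from the question and uses func.count(...).filter(...), which SQLAlchemy renders as a FILTER (WHERE ...) clause on PostgreSQL):

from sqlalchemy import select, func, desc

stmt = (
    select(
        User.name,
        func.count(User.user_id).label("total"),
        # per-city counts, emitted as COUNT(*) FILTER (WHERE ...) on PostgreSQL
        func.count(User.user_id).filter(User.city == "New York").label("ny"),
        func.count(User.user_id).filter(User.city == "Chicago").label("chi"),
        func.count(User.user_id).filter(User.city == "Los Angeles").label("la"),
    )
    .group_by(User.name)
    .order_by(desc("total"))
    .limit(100)
)

Each result row would then come back as (name, total, ny, chi, la), matching the columns of the raw SQL above.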
Upvotes: 1