Reputation: 3802
I am implementing an employee planning solution where staff can set their preferred work times and the system can also recommend the best times for staff to work.
To provide working-time recommendations, I'd like a recommendation system that can recommend a number of working shifts to staff based upon:
- The organisation's staff requirements. These are interval-based (1 hour) and have a min/max staff count per interval (e.g. at 13:00-14:00, I need min 4 and max 6 staff).
- Rules that a recommended shift has to follow (e.g. no recommended shift should exceed max_allowed_work_hours_in_week; if an employee has completed 35 hours by Thursday and max_allowed_work_hours_in_week is 40, I can only recommend a shift of up to 5 hours).
- My historical shifts (e.g. I like to work evenings on Friday and my history says so, so a good Friday recommendation would be (guess what :)) an evening shift).
I have not done much homework, as everything leads to the Hadoop ecosystem, and about Hadoop I have as much idea as a toddler (non-prodigy) has of quantum physics. Anyhow, here's what I came up with: I could use Apache Spark or Mahout, OR standalone Apache PredictionIO (I'm in the Java world). I know constraint solvers like OptaPlanner could be pushed to solve this problem, but I believe it's not the right tool for the job (though I could be wrong).
My question is: what system would you recommend for such recommendations, and are Spark/PredictionIO the best tools for this job?
Upvotes: 1
Views: 334
Reputation: 1109
I am implementing an employee planning solution where staff can set their preferred work times and the system can also recommend the best times for staff to work.
Your use case is really similar to the employee rostering example from OptaPlanner. Each employee has their own preferred work times, written down in a contract between the employee and the hospital.
The organisation's staff requirements. These are interval-based (1 hour) and have a min/max staff count per interval (e.g. at 13:00-14:00, I need min 4 and max 6 staff).
The example also has the same requirement: every shift has a minimum number of staff needed.
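A minimal plain-Java sketch of that minimum-staffing check (names like `meetsBounds` and the data shape are illustrative assumptions, not OptaPlanner's API — in OptaPlanner this would typically become a hard constraint):

```java
import java.util.Map;

public class StaffingCheck {

    // headcount: hour-of-day -> number of staff assigned to that 1-hour interval.
    // Returns true if the interval's headcount lies within [min, max].
    static boolean meetsBounds(Map<Integer, Integer> headcount,
                               int hour, int min, int max) {
        int assigned = headcount.getOrDefault(hour, 0);
        return assigned >= min && assigned <= max;
    }

    public static void main(String[] args) {
        Map<Integer, Integer> headcount = Map.of(13, 5, 14, 3);
        System.out.println(meetsBounds(headcount, 13, 4, 6)); // true: 5 is in [4, 6]
        System.out.println(meetsBounds(headcount, 14, 4, 6)); // false: only 3 assigned
    }
}
```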
Rules that a recommended shift has to follow (e.g. no recommended shift should exceed max_allowed_work_hours_in_week; if an employee has completed 35 hours by Thursday and max_allowed_work_hours_in_week is 40, I can only recommend a shift of up to 5 hours).
Those rules are all provided in the employee contract, e.g. an employee must work a minimum of 35 hours per week, or must work 3 consecutive days per week.
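The weekly-cap rule from the question reduces to a simple remainder calculation; a hedged sketch (the method name is hypothetical):

```java
public class WeeklyCap {

    // Given hours already worked this week and the contract's weekly cap,
    // the longest shift we may still recommend is the non-negative remainder.
    static int maxRecommendableShiftHours(int workedHours, int maxWeeklyHours) {
        return Math.max(0, maxWeeklyHours - workedHours);
    }

    public static void main(String[] args) {
        // 35 hours worked by Thursday, 40-hour cap -> at most a 5-hour shift
        System.out.println(maxRecommendableShiftHours(35, 40)); // 5
        // Already at or over the cap -> nothing can be recommended
        System.out.println(maxRecommendableShiftHours(40, 40)); // 0
    }
}
```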
Recommendations also need to respect my historical shifts (e.g. I like to work evenings on Friday and my history says so, so a good Friday recommendation would be (guess what :)) an evening shift).
This could be added as a new soft constraint whenever the employee has historical data.
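One way such a soft constraint could score a candidate shift against history is to penalize start hours the employee has rarely worked. This is a plain-Java sketch of the idea, not OptaPlanner's constraint API; the penalty function and names are illustrative assumptions:

```java
import java.util.List;

public class HistoryPreference {

    // Soft penalty: 0 if every historical shift started at the candidate hour,
    // growing toward the history size as the candidate hour appears less often.
    static int softPenalty(List<Integer> historicalStartHours, int candidateHour) {
        long matches = historicalStartHours.stream()
                .filter(h -> h == candidateHour)
                .count();
        return historicalStartHours.size() - (int) matches;
    }

    public static void main(String[] args) {
        List<Integer> fridayHistory = List.of(17, 18, 17, 17); // mostly evening starts
        System.out.println(softPenalty(fridayHistory, 17)); // 1 (3 of 4 shifts match)
        System.out.println(softPenalty(fridayHistory, 9));  // 4 (never a morning start)
    }
}
```

In a solver, this penalty would be weighted into the soft score so the optimizer prefers familiar shifts without ever violating hard constraints such as staffing bounds.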
I have not done much homework, as everything leads to the Hadoop ecosystem, and about Hadoop I have as much idea as a toddler (non-prodigy) has of quantum physics. Anyhow, here's what I came up with: I could use Apache Spark or Mahout, OR standalone Apache PredictionIO (I'm in the Java world). I know constraint solvers like OptaPlanner could be pushed to solve this problem, but I believe it's not the right tool for the job (though I could be wrong).
I think you could combine Hadoop to store and process your big data, then feed the processed data to OptaPlanner to get an optimized result. If you want real-time planning, Apache Spark could quickly process new data and feed it to OptaPlanner to get the latest optimized result. So I really recommend you go and try the nurse rostering example from OptaPlanner. Hope this helps, kind regards.
Upvotes: 1