Kayhan Moharreri, Ph.D.

Senior Data Scientist
Apple Inc.

Email: kayhan [at] moharreri [dot] org
Location: Apple Inc., Sunnyvale, CA
I am a senior data scientist in the Applied Machine Learning division at Apple. I design, develop, and productionize Machine Learning and Optimization solutions to achieve operational excellence and a superior customer experience.

Before joining Apple, I was a principal research scientist at Accenture Operations, responsible for the design and development of Artificial Intelligence solutions that accelerate human-intensive business processes. Prior to that, I was a postdoctoral researcher in the Interactive Data Systems Lab at The Ohio State University, working with Prof. Arnab Nandi. Before that, I received my Ph.D. in Computer Science and Engineering from The Ohio State University in 2017, where I worked with Prof. Jay Ramanathan and Prof. Rajiv Ramnath on resolution recommendations and information extraction over IT service operations. I also received my M.Sc. in Computer Science and Engineering from The Ohio State University in 2015, and my B.Sc. in Computer Science from Shahid Beheshti University in 2010.
Research Interests

My research interests are machine learning, data mining, and knowledge discovery. Since 2011, I have contributed to the development of decision support systems that provide machine learning solutions in areas such as expert finding, emergency response management, and automated grading. My research focus during my Ph.D. was on process discovery in collaborative expert networks and resolution recommendations for IT service management.
Recent Publications

* Service Level Aware Queue Management for Reliable Expert Collaborations [in preparation]
   Expert Systems with Applications
   Kayhan Moharreri, Sobhan Moosavi, Jayashree Ramanathan, Rajiv Ramnath


* Augmenting Collective Expert Networks to Improve Service Level Compliance [link, bib]
   Doctoral Dissertation, 2017
   Kayhan Moharreri
Abstract:
This research introduces and develops the new subfield of large-scale collective expert networks (CENs), concerned with time-constrained triaging, which has become critical to the delivery of increasingly complex enterprise services. The main research contribution augments existing human-intensive interactions in the CEN with models that use ticket content and transfer-sequence histories to generate assistive recommendations. This is achieved with a recommendation framework that improves the performance of the CEN by: (1) resolving incidents to meet customer time constraints and satisfaction; (2) conforming to previous transfer sequences that have already achieved their Service Levels; and additionally, (3) addressing trust to encourage adoption of recommendations. A novel basis of this research is the exploration and discovery of resolution process patterns, and leveraging them to construct an assistive resolution recommendation framework. Additional new discoveries regarding CENs include the existence of resolution workflows and their frequent use to carry out service-level-effective resolution of regular content. In addition, the ticket-specific expertise of the problem solvers and their dynamic ticket load were found to be factors in the time taken to resolve an incoming ticket. Also, transfers were found to reflect the experts' local problem-solving intent with respect to the source and target nodes. The network performs well when certain transfer intents (such as resolution and collective) are exhibited more often than others (such as mediation and exploratory).

The assistive resolution recommendation framework incorporates appropriate strategies for addressing the entire spectrum of incidents. The framework consists of a two-level classifier with the following parts: (1) a content tagger for routine/non-routine classification, (2) a sequence classifier for resolution workflow recommendation, (3) a response time estimator based on the learned dynamics of the CEN (i.e., expertise and ticket load), and (4) a transfer intent identifier. Our solution makes reliable proactive recommendations only when there is adequate historical evidence, thus helping to maintain a high level of trust with the interacting users in the CEN. By separating well-established resolution workflows from incidents that depend on experts' experiential and 'tribal' knowledge for resolution, this research shows a 34% performance improvement over the existing content-aware greedy transfer model; it is also estimated that there will be a 10% reduction in the volume of service-level-breached tickets.

The contributions are shown to benefit enterprise support and delivery services by providing (1) lower decision and resolution latency, (2) a lower likelihood of service level violations, and (3) higher workforce availability and effectiveness. More generally, the contributions of this research apply to a broad class of problems where time-constrained, content-driven problem solving by human experts is a necessity.
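
To make the two-level structure concrete, here is a minimal Python sketch (not the dissertation's implementation): a TF-IDF plus logistic regression tagger stands in for the routine/non-routine content classifier, a frequency lookup over invented transfer histories stands in for the sequence classifier, and recommendations are suppressed when historical support is thin. All names and data are illustrative assumptions.

    # Level 1: content tagger (routine vs. non-routine).
    # Level 2: recommend the most frequent historically SL-achieving
    # transfer sequence, but only with adequate evidence.
    from collections import Counter
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    tickets = [  # (ticket text, routine?) -- synthetic examples
        ("password reset for domain account", 1),
        ("vpn client fails to connect", 1),
        ("disk quota exceeded on home share", 1),
        ("intermittent data corruption in legacy billing batch", 0),
        ("unexplained kernel panic on virtualization host", 0),
    ]
    texts, routine = zip(*tickets)
    tagger = make_pipeline(TfidfVectorizer(), LogisticRegression())
    tagger.fit(texts, routine)

    # Invented transfer histories, keyed here by a keyword for brevity.
    history = {
        "password": [("HELPDESK", "IDENTITY"), ("HELPDESK", "IDENTITY")],
        "vpn": [("HELPDESK", "NETWORK"), ("HELPDESK", "NETWORK", "SECURITY")],
    }

    def recommend(text, min_support=2):
        """Return a transfer sequence only when evidence is adequate."""
        if tagger.predict([text])[0] == 0:
            return None  # non-routine: defer to expert judgment
        for key, seqs in history.items():
            if key in text:
                seq, support = Counter(seqs).most_common(1)[0]
                if support >= min_support:
                    return seq
        return None  # insufficient evidence: stay silent to preserve trust

    print(recommend("password reset requested"))  # likely ('HELPDESK', 'IDENTITY')
    print(recommend("kernel panic on a host"))    # likely None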

* Motivating Dynamic Features for Resolution Time Estimation within IT Operations Management [pdf, link, bib]
   IEEE BigData Workshop on Big Data for Cloud Operations Management (BigData BDCOM), 2016
   Kayhan Moharreri, Jayashree Ramanathan, Rajiv Ramnath
Abstract:
Cloud-based services today depend on many layers of virtual technology and application services. Incidents and problems that arise in such complex operational environments are logged as tickets, worked on by experts, and finally resolved. To assist these experts, any machine recommendation method must meet the following critical business requirements: 1) the ticket must be resolved within specific time constraints or Service Level Targets (SLTs), and 2) any predictive assistance must be trustworthy. Existing research uses probabilistic models to recommend transfers between experts based on limited features intrinsic to the ticket content, and does not demonstrate how to meet SLTs. To address this gap and ensure SLT compliance for an incoming ticket given its recommended sequence of experts, an accurate time-to-resolve (TTR) estimate is needed. This research aims to identify important features for modeling TTR estimation given the routing recommendation sequences. In particular, this work makes the following contributions: 1) it constructs a framework for assessing TTR estimates and their SLT compliance, 2) applies the assessment to a baseline estimation model to identify the need for better TTR modeling, 3) uses language modeling to study the impact of anomalous content on estimation error, and 4) introduces a set of dynamic features and a methodology to rigorously model TTR estimation.
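
As a toy illustration of why dynamic features matter, the sketch below regresses a synthetic TTR on a static content feature alone versus the same feature plus assignment-time dynamics (ticket load and expertise match). The data, feature names, and the gradient-boosting learner are illustrative assumptions, not the paper's setup.

    # Compare TTR regression with and without dynamic CEN features.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 200
    severity = rng.integers(1, 5, size=n)        # static, content-derived
    ticket_load = rng.integers(0, 20, size=n)    # open tickets per expert
    expertise = rng.random(n)                    # content-expert match, 0..1

    # Synthetic TTR (hours): load slows resolution, expertise speeds it up.
    ttr = 4 * severity + 1.5 * ticket_load - 10 * expertise + rng.normal(0, 2, n)

    X_static = severity.reshape(-1, 1)
    X_full = np.column_stack([severity, ticket_load, expertise])
    model = GradientBoostingRegressor(random_state=0)
    for name, X in [("static only", X_static), ("with dynamics", X_full)]:
        r2 = cross_val_score(model, X, ttr, cv=5, scoring="r2").mean()
        print(f"{name}: mean CV R^2 = {r2:.2f}")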

* Probabilistic Sequence Modeling for Trustworthy IT Servicing by Collective Expert Networks [pdf, link, bib]
   IEEE International Conference on Computers, Software & Applications (COMPSAC), 2016
   Kayhan Moharreri, Jayashree Ramanathan, Rajiv Ramnath
Abstract:
Within the enterprise, the timely resolution of incidents that occur within complex Information Technology (IT) systems is essential for the business, yet it remains challenging to achieve. To provide incident resolution, existing research applies probabilistic models locally to reduce the transfers (links) between expert groups (nodes) in the network. This approach is inadequate for incident management that must meet IT Service Levels (SLs). We show this using an analysis of enterprise 'operational big data' and the existence of collective problem solving, in which expert skills are often complementary and are applied in meaningful sequences. We call such a network a 'Collective Expert Network' (CEN). We propose a probabilistic model that uses the content of transfer sequences to generate assistive recommendations and improves the performance of the CEN by: (1) resolving incidents to meet customer time constraints and satisfaction (not just minimizing the number of transfers), (2) conforming to previous transfer sequences that have already achieved their SLs, and additionally (3) addressing trust to ensure adoption of recommendations. We present a two-level classification framework that learns regular patterns first and then recommends SL-achieving sequences for a subset of tickets; for the remainder, it directly recommends knowledge improvement. Experimental validation shows a 34% accuracy improvement over other existing research and locally applied generative models. In addition, we show a 10% reduction in the volume of SL-breaching incidents and a 7% reduction in the mean time to resolution (MTTR) of all tickets.
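
A first-order Markov chain over historical transfer sequences is one simple way to picture this kind of sequence modeling; the sketch below, with invented group names and histories, suggests the most probable next expert group given past SL-achieving transfers.

    # First-order Markov model over expert-group transfer sequences.
    from collections import Counter, defaultdict

    sequences = [  # invented transfer histories that met their SLs
        ["HELPDESK", "NETWORK", "SECURITY"],
        ["HELPDESK", "NETWORK", "RESOLVED"],
        ["HELPDESK", "DATABASE", "RESOLVED"],
        ["HELPDESK", "NETWORK", "RESOLVED"],
    ]
    transitions = defaultdict(Counter)
    for seq in sequences:
        for src, dst in zip(seq, seq[1:]):
            transitions[src][dst] += 1

    def next_group(current):
        """Most probable next group and its empirical probability."""
        counts = transitions[current]
        if not counts:
            return None, 0.0
        dst, c = counts.most_common(1)[0]
        return dst, c / sum(counts.values())

    print(next_group("HELPDESK"))  # ('NETWORK', 0.75)
    print(next_group("NETWORK"))   # ('RESOLVED', ~0.67)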

* Cost-Effective Supervised Learning Models for Software Effort Estimation in Agile Environments [pdf, link, bib]
   IEEE International Workshop on Quality Oriented Reuse of Software (COMPSAC QUORS), 2016
   Kayhan Moharreri, Alhad Sapre, Jayashree Ramanathan, Rajiv Ramnath
Abstract:
Software development effort estimation is the process of predicting the most realistic effort required to develop or maintain software. It is important to develop estimation models and appropriate techniques to avoid losses caused by poor estimation. However, no existing method is well suited to Agile development, where frequent iterations involve the customer and make the estimation process time-consuming. To address this, an automated estimation methodology called "Auto-Estimate" is proposed to complement Agile's manual Planning Poker. Auto-Estimate leverages features extracted from Agile story cards along with their actual effort times. The approach is justified by evaluating alternative machine learning algorithms for effort prediction. It is shown that selected machine learning methods perform better than Planning Poker estimates in the later stages of a project. This estimation approach is evaluated for accuracy, applicability, and value, and the results are presented within a real-world setting.
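
The following sketch conveys the Auto-Estimate idea in miniature: learn a regression from story-card text to observed effort. The cards and hours are invented, and Ridge regression over TF-IDF features is just one plausible candidate learner rather than the paper's chosen model.

    # Text-to-effort regression over story cards (illustrative only).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    cards = [  # (story card text, actual effort in hours) -- invented
        ("add logout button to navbar", 3),
        ("fix typo on settings page", 1),
        ("migrate user table to new schema", 13),
        ("add schema migration rollback support", 8),
        ("update navbar styling", 2),
        ("redesign reporting pipeline schema", 13),
    ]
    texts, hours = zip(*cards)
    model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
    model.fit(texts, hours)

    # Estimates for unseen cards lean on vocabulary shared with history.
    print(model.predict(["fix typo in navbar"])[0])
    print(model.predict(["migrate reporting schema"])[0])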

* Recommendations for Achieving Service Levels within Large-scale Resolution Service Networks [pdf, link, bib]
   ACM Compute, 2015
   Kayhan Moharreri, Jayashree Ramanathan, Rajiv Ramnath
Abstract:
This work introduces a new recommendation framework that addresses the correct and quick resolution of incidents occurring within the complex systems of an enterprise. It uses statistical learning to mediate problem solving by large-scale Resolution Service Networks (with nodes as technical expert groups) that collectively resolve the incidents logged as tickets. Within the enterprise, a key challenge is to resolve the tickets arising from operational big data (1) to the customers' satisfaction and (2) within a time constraint; that is, to meet service level (SL) goals. The challenge in meeting SLs is the lack of a global understanding of the types of problem-solving expertise needed. Consequently, tickets are often misrouted to experts who are inappropriate for solving the next increment of the problem. The solution proposed here is a general two-level classification framework that recommends an SL-efficient sequence of expert groups that can jointly resolve an incoming ticket. Experimental validation shows a 34% accuracy improvement over existing locally applied generative models. Additionally, recommended sequences are more than 96% likely to meet the enterprise SL goals, which reduces the SL violation rate by 29%. Recommendations are suppressed for non-routine content, which is automatically flagged for special attention by humans, since humans outperform statistical models on such tickets.
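
The suppression behavior can be pictured with a small sketch: a candidate sequence is recommended only if its historical SL-compliance rate clears a threshold with enough support; otherwise the ticket is flagged for human triage. The counts and thresholds below are invented for illustration.

    # Threshold-based suppression of low-confidence recommendations.
    # (sequence) -> (times used, times it met the SL); invented counts.
    sequence_stats = {
        ("HELPDESK", "IDENTITY"): (120, 117),
        ("HELPDESK", "NETWORK", "SECURITY"): (40, 30),
    }

    def recommend(candidates, sl_threshold=0.96, min_uses=20):
        best, best_rate = None, 0.0
        for seq in candidates:
            used, met = sequence_stats.get(seq, (0, 0))
            if used >= min_uses and met / used >= max(sl_threshold, best_rate):
                best, best_rate = seq, met / used
        return best  # None means: suppress and flag for human attention

    print(recommend([("HELPDESK", "IDENTITY"),
                     ("HELPDESK", "NETWORK", "SECURITY")]))
    # ('HELPDESK', 'IDENTITY'): 117/120 = 0.975 clears the 0.96 bar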

* EvoGrader: Automated Online Formative Assessment Tool for Evaluating Written Explanations [pdf, link, bib]
   Evolution: Education and Outreach, 2014
   Kayhan Moharreri, Minsu Ha, Ross Nehm
Abstract:
EvoGrader is a free, online, on-demand formative assessment service designed for use in undergraduate biology classrooms. EvoGrader's web portal is powered by Amazon's Elastic Compute Cloud and run with LightSIDE Lab's open-source machine-learning tools. The EvoGrader web portal allows biology instructors to upload a response file (.csv) containing unlimited numbers of evolutionary explanations written in response to 86 different ACORNS (Assessing COntextual Reasoning about Natural Selection) instrument items. The system automatically analyzes the responses and provides detailed information about the scientific and naive concepts contained within each student's response, as well as overall student (and sample) reasoning model types. Graphs and visual models provided by EvoGrader summarize class-level responses; downloadable files of raw scores (in .csv format) are also provided for more detailed analyses. Although the computational machinery that EvoGrader employs is complex, using the system is easy. Users only need to know how to use spreadsheets to organize student responses, upload files to the web, and use a web browser. A series of experiments using new samples of 2,200 written evolutionary explanations demonstrates that EvoGrader scores are comparable to those of trained human raters, although EvoGrader scoring takes 99% less time and is free. EvoGrader will be of interest to biology instructors teaching large classes who seek to emphasize scientific practices such as generating scientific explanations, and to teach crosscutting ideas such as evolution and natural selection. The software architecture of EvoGrader is described, as it may serve as a template for developing machine-learning portals for other core concepts within biology and across other disciplines.
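
The core scoring idea, one detector per concept over free-text responses, can be approximated with a rough scikit-learn analogue. EvoGrader itself is built on LightSIDE models rather than the code below, and the responses and concept labels here are invented.

    # Multi-label concept detection over written explanations.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import MultiLabelBinarizer

    responses = [  # invented student explanations
        "individuals with the trait survive and reproduce more",
        "the trait is heritable and passed to offspring",
        "random mutation created variation, which is heritable",
        "organisms try harder and adapt because they need to",
    ]
    concepts = [  # concept labels per response (the last one is naive)
        {"differential_survival"},
        {"heritability"},
        {"variation", "heritability"},
        {"need_based_naive"},
    ]
    mlb = MultiLabelBinarizer()
    Y = mlb.fit_transform(concepts)

    clf = make_pipeline(
        TfidfVectorizer(),
        OneVsRestClassifier(LogisticRegression()),
    )
    clf.fit(responses, Y)

    pred = clf.predict(["variation in the population is heritable"])
    print(mlb.inverse_transform(pred))  # detected concept set(s)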

Teaching
  1. Graduate Teaching Associate, Computer Science and Engineering, The Ohio State University:

    1. CSE 1222: Introduction to C++ (Course Instructor), Fall 2013 & Summer 2017
    2. CSE 5243: Introduction to Data Mining (Course Grader), Fall 2016 & Spring 2017

  2. Teaching Assistant, Computer Science, Shahid Beheshti University:

    1. Data Structures and Algorithms, Spring 2010
    2. Databases & SQL, Spring 2009 & Spring 2010
    3. Calculus I, Fall 2007

Other Experience
  1. Data Scientist, Nationwide Insurance, Columbus, OH, May 2013 - May 2016: Worked as a part-time consultant providing data-driven process improvement solutions for IT Service Management:

    1. Developed a customizable topic discovery tool that operates over high-business-impact incidents in a large-scale IT infrastructure, using topic modeling and an SVM over problem-resolution datasets

    2. Designed and developed a risk estimator for IT infrastructure releases that determines whether an upcoming infrastructure change will lead to a high-impact incident, using logistic regression and regularization (a sketch in this spirit appears after this list)

    3. Designed and developed a recommendation engine for ticket resolution paths to effectively achieve Service Level Agreements, using unique generative models for ticket routing

  2. Software Engineering Intern, Wellpoint, Inc., Wallingford, CT, Summer 2012:

    1. Developed a named-entity de-identification tool for anonymizing patients' sensitive records, used with IBM Watson in the pre-authorization of medical treatment

  3. More information about my professional experience can be found on LinkedIn.
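
For flavor, here is a minimal sketch in the spirit of the release-risk estimator from the Nationwide work above: L2-regularized logistic regression over a few change attributes. The features, data, and coefficients are all illustrative assumptions.

    # Release-risk estimation with regularized logistic regression.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 300
    components = rng.integers(1, 15, size=n)           # parts touched
    off_hours = rng.integers(0, 2, size=n)             # off-hours change?
    days_since_incident = rng.integers(0, 90, size=n)  # recency of trouble

    # Synthetic labels: large, off-hours changes soon after a prior
    # incident are likelier to cause a high-impact incident.
    logit = 0.3 * components + 1.0 * off_hours - 0.03 * days_since_incident - 2
    high_impact = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X = np.column_stack([components, off_hours, days_since_incident])
    model = LogisticRegression(penalty="l2", C=1.0).fit(X, high_impact)

    risky = [[12, 1, 3]]  # large, off-hours, right after an incident
    safe = [[2, 0, 80]]
    print(model.predict_proba(risky)[0, 1], model.predict_proba(safe)[0, 1])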


Page last updated: February 2021