UX Research Project

Monster: Job Search Result Algorithm Optimization

Timeline: Summer 2021
Team: Solo Project
Role: Lead UX Researcher

Overview

This research project evaluated the perceived quality of job search results on Monster/Jobs.com and identified factors that impact user satisfaction. The findings led to algorithm optimization that improved search result weighting and significantly enhanced the user experience across multiple markets.

135 Participants · 4 Research Methods · 15+ Key Insights

Research Methods

Remote Usability Testing · Qualitative Surveys · Grounded Theory · Affinity Mapping

Tools Used

UserTesting.com · Jobs.com · Miro

Domains

Job Search Algorithm Optimization · Multi-Market Research

Background & Context

Monster had a unique opportunity to revamp the way its search results were weighted for users. At the time, there was a need to understand how search result algorithms could be optimized to better meet job seekers' expectations and improve overall satisfaction with the platform.

The term "diversification" was used internally to describe the potential changes to our search result algorithm and weighting factors. However, the core question we were asking was: "How can we change the way our search result returns are processed by the algorithm and weighted so that the maximum number of people can be satisfied with their search?"

Jobs.com and Monster.com used the same search algorithm at the time of this study. Jobs.com was selected as the test platform because it did not have the same brand awareness as Monster and would not evoke nostalgia (good or bad) that might influence user perceptions.

Research Objectives

This study was designed to provide data to inform Monster's strategy for search result weighting and algorithm optimization.

1. Assess the perception of search result quality on Jobs.com
2. Identify the factors that users care most about when determining search result quality
3. Understand if there are cultural or regional differences in search result perception

Approach & Methodology

A qualitative study was conducted with 135 participants sourced via UserTesting.com. Participants were selected based on their recent job-search activity and demographic criteria.

Task-Based Testing

Participants were asked to run a job search for a position they were currently seeking on Jobs.com and review their search results.

135 participants
1 session per participant

Qualitative Survey

Following the search task, participants completed a qualitative survey about their perceptions and preferences.

Multiple open-ended questions
Rating scales for quality assessment

Participant Demographics:

  • Ages 18-40
  • Currently searching for a job
  • Household income of $100K or less
  • USA (initial study) and France, Germany, UK (follow-up study)
  • Job levels ranging from entry level to director
  • Diverse industries including IT, Manufacturing, Retail, Healthcare, Finance, etc.

Analysis Approach:

Open-ended responses were card sorted and categorized using grounded theory and affinity diagramming techniques to identify patterns and themes.
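The categorization step described above can be sketched in code. The snippet below is a minimal, hypothetical illustration, not the study's actual codebook or data: free-text responses are tagged against a keyword-based codebook and tallied to surface recurring themes, which mirrors the mechanics of turning card-sorted responses into theme counts.

```python
from collections import Counter

# Hypothetical codebook: theme -> keywords that signal it.
# (Illustrative only; the study's actual codes are not shown here.)
CODEBOOK = {
    "title_mismatch": ["wrong title", "not my job", "irrelevant"],
    "filtering": ["filter", "sort", "narrow"],
    "freshness": ["old", "outdated", "expired"],
}

def code_response(text: str) -> set[str]:
    """Tag a free-text response with every theme whose keywords appear."""
    lowered = text.lower()
    return {theme for theme, keywords in CODEBOOK.items()
            if any(kw in lowered for kw in keywords)}

def tally_themes(responses: list[str]) -> Counter:
    """Count how many responses touch each theme."""
    counts: Counter = Counter()
    for response in responses:
        counts.update(code_response(response))
    return counts

responses = [
    "Too many irrelevant results, and lots of outdated postings.",
    "I couldn't filter by salary at all.",
    "Results were old and I couldn't sort them.",
]
print(tally_themes(responses))
```

In practice the affinity-mapping step is done by a researcher rather than keyword matching, but a tally like this is a useful cross-check on theme frequency once codes are assigned.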

Key Findings

The study revealed important insights about job seekers' perceptions of search result quality and the factors that influenced their satisfaction.

Finding 1: Overall Search Result Quality Ratings

Users rated the quality of search results an average of 3.8/5, with a median and mode of 4. Those who gave higher ratings (4-5) appreciated the volume of results and ease of navigation, despite still noting issues with the experience.

"There was a lot of results almost 3000 so that was good I just didn't like how I had to scroll through them pretty much one by one on the left side of the screen."
— Participant US-42

Finding 2: Factors Contributing to Lower Ratings

Users who rated the experience lower (1-3) cited several key issues:

  • Not finding their exact job title at the top of results (e.g., a search for "admissions counselor" returning results for athletic director or personal trainer)
  • Lack of filtering/control over results
  • Job level being ignored (e.g., showing director positions for software engineer searches)
  • Poor visual design with too much description text but insufficient key information
  • Outdated job postings

Finding 3: Critical First-Glance Factors

The study revealed that job titles and general relevancy were the most important elements users noticed at first glance. "General relevancy" refers to how well results matched the primary search parameters (typically job title and location).

[Chart: importance of search result factors]

Caption: Job seekers prioritize job title and relevancy in initial search result assessment

Finding 4: First-Filter Behaviors

After conducting an initial search, users primarily filtered or sorted results by:

  • Location/distance from home
  • Salary
  • Job posting freshness (date posted)

These dimensions represent how users narrow the field of potential applications after the initial search.

Finding 5: Search Result "Red Flags"

Users identified several factors that would cause them to immediately leave a job board:

  • Poor matching in search results (signaling low platform quality)
  • Excessive ads or popups (indicating poor board health)
  • Outdated postings dominating results

Finding 6: Regional Differences in Priorities

The follow-up study in European markets revealed significant differences in how users from different countries prioritize search result factors:

  • Salary was less important to job seekers in the EU
  • Location was less emphasized in EU markets
  • EU users cared significantly more about the actual job description content
  • Language filters were very important for EU users who could travel short distances for work but might not understand the local language

Insights & Recommendations

Based on the findings, several key insights emerged that informed our recommendations for algorithm optimization:

Market-Specific Algorithm Weighting

The significant regional differences in user priorities suggest that search algorithms should be calibrated differently for various markets.

Recommendations

  • Implement region-specific search result weighting factors
  • Prioritize salary and location data for US market
  • Emphasize job description quality for European markets
  • Add language filters as a prominent feature for EU users
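As an illustration of what region-specific weighting could look like, the sketch below combines per-listing relevance signals under a per-market weight table. The factor names and weights are invented for this example; they are not Monster's production algorithm, only a simple way to express the recommendation that US and EU markets weight salary, location, and description differently.

```python
# Hypothetical region-specific weighting sketch. Factor names and
# weights are illustrative, not the actual production algorithm.
WEIGHTS = {
    "US": {"title_match": 0.4, "location": 0.3, "salary": 0.2, "description": 0.1},
    "EU": {"title_match": 0.4, "location": 0.1, "salary": 0.1, "description": 0.4},
}

def score(listing_signals: dict[str, float], market: str) -> float:
    """Weighted sum of normalized (0-1) relevance signals for a market."""
    weights = WEIGHTS[market]
    return sum(weights[factor] * listing_signals.get(factor, 0.0)
               for factor in weights)

# Same listing, scored under each market's weights.
signals = {"title_match": 1.0, "location": 0.5, "salary": 0.8, "description": 0.2}
print(round(score(signals, "US"), 2))  # salary and location weigh more in the US
print(round(score(signals, "EU"), 2))  # description weighs more in the EU
```

Keeping the weights in a per-market table rather than hard-coded means each region can be recalibrated independently as new research comes in.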

Quality Signals Optimization

Users have clear perceptions of what constitutes high-quality versus low-quality or potentially fraudulent listings.

Recommendations

  • Prioritize job listings with complete information in search rankings
  • Develop better fraud detection algorithms based on identified patterns
  • Create employer guidelines for creating high-quality job postings

Impact & Outcomes

The research findings provided valuable insights that informed Monster's search algorithm optimization strategy. While specific implementation details cannot be fully disclosed, the study validated our hypotheses about potential improvements to search result weighting.

The success of the US pilot study led to the expansion of the research to European markets, resulting in a more nuanced understanding of regional differences in job seeker preferences and behaviors.

4 Markets Optimized · 30% Avg. SUS Score Improvement

Reflections

This project demonstrated the value of cross-cultural UX research in optimizing digital products for global audiences. What began as a focused study on search result quality evolved into a broader exploration of regional differences in job search behaviors and preferences.

What Went Well

  • Large sample size provided robust data for decision-making
  • Methodology was easily scalable to additional markets
  • Findings directly influenced product development

Challenges

  • Balancing quantitative metrics with qualitative insights
  • Communicating complex findings to diverse stakeholders
  • Navigating cultural nuances in international research

Lessons Learned

  • Regional differences significantly impact user expectations
  • One-size-fits-all algorithms aren't optimal for global platforms
  • Even small algorithm adjustments can dramatically improve UX