Methodology

How this research is conducted, what gets measured, and the ethical boundaries that guide fieldwork

This page documents the methods I use to study anticipatory care in hospitality environments.

The goal: understand how practitioners make judgment calls under constraint, then investigate whether transparent intelligence tools could enhance—not replace—that human judgment.

The approach: direct field observation, structured operator interviews, pattern extraction, and technical experimentation with explainable cognitive architectures.

This work sits at the intersection of ethnographic research, service design, and explainable AI. The methods draw from anthropology (participant observation, semi-structured interviews), operations research (timing studies, service blueprinting), and human-computer interaction (prototype testing, usability observation).

Everything documented here is designed to be replicable, ethical, and operator-centered.


Research Questions

Primary Question

How do expert hospitality practitioners anticipate guest needs under real constraints?

Sub-questions:

  • What contextual cues do operators read when making service decisions?
  • How do they balance personalization with efficiency?
  • What mental models guide their judgment under time pressure?
  • Where do existing tools help vs. hinder that intelligence?

Secondary Question

Can transparent cognitive architectures support—not replace—operator judgment?

Sub-questions:

  • What does "explainability" mean to a front desk agent troubleshooting a recommendation?
  • How much context is enough for a system to feel attentive vs. invasive?
  • What does graceful failure look like when technology breaks during service?
  • Can systems learn from operator corrections without requiring explicit training?

Field Sites & Selection Criteria

Types of Properties Studied

Boutique hotels (8-30 rooms)

  • Small enough that staff know most guests by name
  • Large enough to face coordination challenges across shifts
  • Examples: Independent properties in Barcelona, Lisbon, Paris

Vacation rentals (1-5 properties per manager)

  • Direct host-guest relationships
  • High variability in guest expectations
  • Limited on-site staff; remote management common

Cafés & hospitality spaces

  • Repeat customer relationships
  • Real-time service adaptation
  • Visible service choreography (open kitchens, counter service)

Ryokans & culturally specific hospitality models

  • Deep tradition of anticipatory care
  • Explicit codification of service rituals
  • Cross-cultural communication challenges

Selection Criteria

Properties must meet at least 3 of these:

  1. Small team size (under 20 staff), where individual judgment matters
  2. High guest satisfaction (reviews mentioning "attentive," "anticipatory," "personal")
  3. Operational constraints (peak hours, language barriers, tech limitations)
  4. Willingness to participate (permission for observation, interview access)
  5. Diversity of context (different cultures, property types, service models)

I prioritize properties where care happens despite constraints—not because they have enterprise budgets or perfect tools.


Field Observation Protocol

What I Observe

  • Arrival sequences (first 5-10 minutes): Greeting ritual, information gathering, context reading, service choreography.
  • Peak operations: Multiple arrivals/departures, system failures, staff coordination, triage decisions.
  • Transition moments: Shift handoffs, daily briefings, information transfer protocols.
  • Breakdown & recovery: Failure modes, operator improvisation, guest escalation, post-incident reflection.

What I Measure

  • Timing: Arrival-to-greeting interval, check-in duration, response times, queue lengths.
  • Counts: Staff-to-guest ratios, interaction frequency, touchpoints per stay, system access patterns.
  • Artifacts: Service scripts, backup protocols, checklists, physical tools.
  • Qualitative signals: Tone shifts, body language cues, moments of visible care, guest reactions.

Field Notes Format

Each observation produces a Field Report with context, scene narrative, measurements, patterns, questions, and operator reflection.

Property: 14-room boutique, Barcelona

Date: Nov 9, 2025, 3:20 PM

Weather: Light rain

Staff on duty: 2 (front desk + housekeeping)

Scene:

Family of four arrives 40 minutes early. Host greets at door with umbrella before they exit taxi. Offers tea in lounge while room finishes cleaning. Notices kids (ages ~6, 8) and brings coloring books within 2 minutes. Parents visibly relax.

Timing:

  • Door open: 4 seconds after taxi stops
  • Greeting to seating: 45 seconds
  • Coloring books appear: 1:52 after seating

Pattern observed:

Pre-emptive need anticipation based on minimal visual cues (umbrella ready = saw weather; coloring books = saw kids in taxi through window).

Operator quote (debrief):

"I always check the weather when I arrive for shift. If it's raining, umbrellas go by the door. And I saw the kids through the window—we keep activities in the desk drawer for families."

Observation Ethics

  • Minimal disruption: I observe from positions that don't interfere with service flow
  • Guest privacy: I never record guest names, personal details, or identifying information
  • Operator consent: Staff know I'm observing and can ask me to pause or leave
  • No photography of guests: Only staff, tools, and environments (with permission)
  • Debriefs, not surveillance: I explain what I'm studying and why

Operator Interview Protocol

Interview Structure

Semi-structured format (45-60 minutes), conducted in operator's preferred location. Audio recorded only with explicit consent; otherwise handwritten notes.

Core Questions

Opening (context-setting)

  • Tell me about your property/operation—what makes it unique?
  • Walk me through a typical day. Where are the peaks and valleys?
  • Who are your guests? How would you describe them to a new hire?

Expertise & judgment

  • Describe a moment where you really nailed anticipatory service—where you knew what someone needed before they asked.
  • How did you know? What cues were you reading?
  • What's the difference between a good interaction and an exceptional one?

Constraints & failure

  • Tell me about a time your system failed—tech broke, information was wrong, someone was unhappy.
  • How did you adapt? What did you do differently next time?
  • What constraints make your job hardest? (Time, tools, information, staffing?)

Tools & intelligence

  • What tools do you actually use every day? What do you wish existed?
  • What information do you track manually because your PMS/software doesn't?
  • If you could give new staff one checklist or protocol, what would it be?

Compensation & Credit

  • No payment for interviews (to avoid bias toward positive framing)
  • Service audits offered: I provide a 1-page summary of observations in exchange
  • Attribution: Operators can choose: full name, first name only, anonymized role, or no attribution
  • Review before publishing: Any quotes or stories shared publicly get operator approval first

Pattern Extraction & Analysis

After field visits, I synthesize raw notes into Pattern Cards—structured artifacts that capture transferable principles.

Pattern Name: Pre-emptive Disambiguation

Context: Guest requests that could mean multiple things

Constraint: Clarifying takes time; guessing wrong wastes more time

The Move: Offer 2-3 specific options instead of asking open-ended questions

→ "Need a dinner spot—romantic date or business casual?"

→ "Coffee nearby—quick espresso or sit-down café?"

Why It Works: Reduces decision fatigue; shows you're thinking ahead

Operator Economics: Requires training to recognize ambiguity patterns

Portable Checklist:

  • List common ambiguous requests (coffee, restaurant, pharmacy)
  • Pre-write 2-3 clarifying options for each
  • Train staff to offer options vs. asking "what kind?"

Related Patterns: Layered Attention, Context Reading

Validation

Patterns get validated through:

  1. Operator feedback: "Does this match your experience?"
  2. Multiple sightings: Observed at 3+ different properties
  3. Reproducibility: Other operators recognize and can apply it
  4. Edge case testing: Where does the pattern break down?
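Pattern Cards and their validation criteria lend themselves to a simple structured representation. A hedged sketch, using the criteria above; the field names and the example data are my illustration:

```python
from dataclasses import dataclass, field

@dataclass
class PatternCard:
    name: str
    context: str
    constraint: str
    move: str
    sightings: set[str] = field(default_factory=set)  # properties where observed
    operator_confirmed: bool = False                  # "Does this match your experience?"

    def is_validated(self) -> bool:
        # Validation per the protocol: operator feedback plus 3+ sightings.
        return self.operator_confirmed and len(self.sightings) >= 3

card = PatternCard(
    name="Pre-emptive Disambiguation",
    context="Guest requests that could mean multiple things",
    constraint="Clarifying takes time; guessing wrong wastes more time",
    move="Offer 2-3 specific options instead of open-ended questions",
)
card.sightings.update({"Barcelona boutique", "Lisbon rental", "Paris cafe"})
card.operator_confirmed = True
```

Using a set for sightings means repeat visits to the same property don't inflate the count.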

Technical Experimentation

I explore whether BDI (belief-desire-intention) cognitive architectures can model how operators think—and whether that modeling makes intelligence more transparent and learnable.

Not building: Black-box automation, "AI chatbots," efficiency dashboards.

Exploring: Explainable reasoning systems that show their work, learn from corrections, and preserve human judgment.
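To make the BDI framing concrete, here is a toy deliberation loop under my own assumptions (the belief names and rules are invented for illustration; this is a sketch, not a production architecture). The property being tested is that every intention carries the beliefs that justified it, so the reasoning stays inspectable and an operator can override it:

```python
from dataclasses import dataclass

@dataclass
class Intention:
    action: str
    because: list[str]  # the beliefs that justified this action

def deliberate(beliefs: dict[str, object]) -> list[Intention]:
    """Toy BDI-style cycle: beliefs in, explained intentions out."""
    intentions = []
    if beliefs.get("weather") == "rain":
        intentions.append(Intention(
            action="stage umbrellas at the door",
            because=["weather == rain"]))
    if beliefs.get("children_arriving"):
        intentions.append(Intention(
            action="bring activity kit to lounge",
            because=["children_arriving == True"]))
    return intentions

# Every recommendation shows its reasoning
for i in deliberate({"weather": "rain", "children_arriving": True}):
    print(f"{i.action}  (because: {', '.join(i.because)})")
```

Graceful degradation falls out naturally: if the system produces nothing, the operator simply proceeds as usual.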

Prototype Testing Protocol

  • Small-scale mockups: Shown to 3-5 operators for feedback on explainability.
  • Wizard-of-Oz testing: Manual simulation of intelligent systems to test assumptions.
  • Limited field pilots: With 1-2 operator partners, designed for learning, not efficiency.
  • Transparent failure analysis: Publicly documenting why and how things break.

Ethics of AI Experimentation

  • No guest data collection without explicit consent
  • Operator override always available
  • No hidden automation—guests and operators know when they're interacting with AI
  • Graceful degradation—if system fails, service continues without it
  • Explainability required—every recommendation shows its reasoning

Data & Privacy

What I Collect

  • Timing measurements (anonymous)
  • Service flow diagrams (no names)
  • Photos of tools/spaces (no guests)
  • Operator quotes (with permission)

What I Never Collect

  • Guest names or personal details
  • Identifiable reservation data
  • Financial information
  • Proprietary systems details

Anonymization Protocol

  • Default: First name + role + city (e.g., "Ana, front desk manager, Lisbon")
  • On request: Role + property type only (e.g., "GM at 12-room boutique")
  • Venue details: Room count and city only; photos edited to remove branding if requested
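The attribution levels above can be expressed as a small function. A sketch with made-up inputs; the level names are my own labels for the protocol's tiers:

```python
def attribute(first_name: str, role: str, city: str,
              property_type: str, level: str = "default") -> str:
    """Render an operator credit at the requested anonymization level."""
    if level == "default":      # first name + role + city
        return f"{first_name}, {role}, {city}"
    if level == "role_only":    # role + property type; no name or city
        return f"{role} at {property_type}"
    return "Anonymous operator" # fallback: no attribution

print(attribute("Ana", "front desk manager", "Lisbon", "12-room boutique"))
# Ana, front desk manager, Lisbon
```

Defaulting to the least-anonymous tier only works because attribution is chosen by the operator before anything is published.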

Limitations & Boundaries

What This Research Can't Do

  • Can't generalize to large-scale operations: I study small teams where individual judgment matters.
  • Can't eliminate the observer effect: My presence changes behavior, though I minimize disruption.
  • Can't capture everything: Some intelligence is tacit and hard to articulate.
  • Can't promise outcomes: This is exploratory research, not a product pitch.

Bias & Positionality

My background shapes what I see:

  • Product design lens → I notice tools and interfaces more than interpersonal dynamics
  • Tech experience → I may overvalue technical solutions
  • Privileged traveler → I access properties/experiences not available to all guests
  • Limited to English + Spanish → I miss nuances in other languages

I compensate by hiring translators, seeking out perspectives that challenge mine, and inviting corrections.


Research Timeline

Current Phase (Nov 2025 - Mar 2026)

Field Observation & Pattern Extraction

  • 12-15 site visits across Barcelona, Lisbon, Paris, Berlin
  • 20+ operator interviews
  • Document 10-15 transferable patterns
  • Publish weekly field notes

Next Phase (Apr 2026 - Aug 2026)

Pattern Validation & Technical Exploration

  • Return visits to validate patterns
  • Cross-cultural comparison
  • Technical prototypes for explainability architectures
  • Pilot 2-3 Pattern Cards with operator partners

Future Phase (Sep 2026+)

Potential Pilot Studies

  • Small-scale field tests with 1-2 properties
  • Transparent reporting on what works/fails
  • Decision point: pursue product development or continue pure research

Timeline is flexible—research follows interesting threads, not arbitrary deadlines.


Publications & Outputs

Field Notes

Narrative observations, measurements, and operator quotes.

Conversations

Structured interviews with practitioners.

Traces

Deep dives into specific patterns and cross-site comparisons.

Design Notes

Technical experiments, prototypes, and failure analysis.


Collaboration & Participation

How Operators Can Participate

  • Nominate your property: Email me with your details. I provide a service audit in exchange.
  • Share a ritual or failure story: Contribute to collective practitioner intelligence.
  • Review Pattern Cards: Help validate what's transferable vs. context-specific.

How Researchers/Builders Can Participate

  • Compare notes on transparent AI: Share research on explainability and cognitive systems.
  • Contribute technical experiments: Propose prototype tests grounded in field observations.

How Anyone Can Contribute

  • Share examples of technology enhancing or degrading care.
  • Question assumptions in my field notes.

Contact & Transparency

Email: carlos@omote.io

What I'm happy to share: Detailed field notes (anonymized), interview protocols, Pattern Cards, technical explorations.

What I can't share: Identifiable operator insights without permission, proprietary systems, raw field data with venue names.

Corrections: If I misinterpret something, email me. I'll correct promptly and note the update.


Why This Methodology Matters

Most hospitality AI is built by people who've never worked a front desk shift. Systems get designed in conference rooms, optimized for metrics that don't capture what makes care human.

This research starts from the opposite direction: operators first, technology second.

By documenting how expert practitioners make judgment calls under constraint—and only then exploring whether transparent architectures could enhance that intelligence—I'm betting we can build tools that make care more human, not less.

The methodology is designed to earn that right:

  • Systematic observation (not casual blogging)
  • Ethical boundaries (not exploitative data mining)
  • Operator-centered (not user research for product development)
  • Public documentation (not proprietary competitive advantage)

This is slow, careful work. But if hospitality intelligence is worth operationalizing, it's worth studying properly first.

Further Reading

Methodological Influences

  • Ethnographic observation: Geertz, Spradley
  • Service design: Lynn Shostack
  • Tacit knowledge: Michael Polanyi
  • Participatory design: Scandinavian tradition
  • Explainable AI: Tim Miller

Related Pages

  • About - Background and research questions
  • Journal - Field notes and observations

This methodology page is a living document. As the research evolves, so will the methods. Last updated: November 2025.