AI Disclosures

Learn more about LANDED's AI systems and risk management.

LANDED's AI Systems and Risk Management

Updated September 5, 2025

LANDED partners with employers to streamline hiring through the use of artificial intelligence (AI) systems (our “Systems”). This disclosure is a clear, public summary of those Systems and of how we manage the risks of algorithmic discrimination.

Types of High-Risk AI Systems We Develop

LANDED provides an AI-powered recruiting platform designed to make hiring faster and more equitable. Our Systems are used to assist employers in making employment decisions and include:

  • AI-Powered Conversational Assistant: This System engages candidates via SMS text message, personalizes conversations, and conducts initial screening based on objective, job-related criteria defined by the hiring manager (e.g., availability, required certifications, work experience).
  • Intelligent AI Scheduler: For candidates who meet the employer's screening criteria, this System automatically coordinates interview times based on the mutual availability of the candidate and the hiring manager.

These Systems are intended to reduce hiring delays, provide consistent screening, and increase candidate engagement. The AI does not make the final hiring decision for candidates who pass the objective screens established from employer-defined, job-related criteria; it assists only in the screening and scheduling phases of the hiring process.
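
To illustrate the screening phase, the following is a minimal sketch (in Python) of how objective, employer-defined criteria might be applied to a candidate's responses. The ScreeningCriterion and evaluate_candidate names, and the example criteria, are hypothetical; this is not LANDED's production code.

```python
from dataclasses import dataclass

@dataclass
class ScreeningCriterion:
    """One objective, job-related requirement set by the hiring manager."""
    question: str          # e.g., "Do you hold a valid food-handler certification?"
    required_answer: bool  # the answer that satisfies the criterion

def evaluate_candidate(criteria: list[ScreeningCriterion],
                       answers: dict[str, bool]) -> str:
    """Return a binary screening result based only on the candidate's responses.

    Every criterion must be satisfied; no demographic data is consulted.
    """
    for c in criteria:
        if answers.get(c.question) != c.required_answer:
            return "Does Not Meet Criteria"
    return "Meets Criteria"

# Hypothetical example: a role requiring weekend availability and a certification.
criteria = [
    ScreeningCriterion("Are you available on weekends?", True),
    ScreeningCriterion("Do you hold a valid food-handler certification?", True),
]
answers = {"Are you available on weekends?": True,
           "Do you hold a valid food-handler certification?": False}
print(evaluate_candidate(criteria, answers))  # -> Does Not Meet Criteria
```

Candidates who receive a "Meets Criteria" result then proceed to scheduling; the sketch shows only that the screen is a deterministic application of the employer's stated criteria.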

How We Manage Risks of Algorithmic Discrimination

LANDED is committed to the fair and ethical use of AI. We manage the known and reasonably foreseeable risks of algorithmic discrimination in our high-risk systems through several key safeguards:

  • Prohibiting the Use of Biased Data: Our systems are designed to screen candidates based only on the unique, job-related hiring criteria set by the employer. We do not request, collect, or use sensitive demographic data (such as race, ethnicity, sex, or religion) to train or operate our AI screening systems.
  • Independent Bias Audits: We engage a qualified, independent third party to conduct annual bias audits of our AI systems. These audits test for potential discriminatory impacts and ensure the systems are functioning as intended.
  • Human Oversight and Intervention: We require our customers (employers) to provide human oversight. Candidates always have the right to request an alternative screening process, ask for a human review of an automated decision, or request corrections to their information.
  • Ongoing Internal Monitoring: Our technical teams regularly review system outputs and behaviors to confirm alignment with job-related criteria and to identify and correct any potential sources of bias.

More detailed documentation regarding system functionality, data usage, limitations, and responsibilities is provided directly to the employers who use our Systems (the “Deployers”) to help them fulfill their own compliance obligations.

LANDED AI Recruiting Platform: Deployer Documentation

Document Version: 1.0
Last Updated: September 5, 2025

Disclaimer: This document is for informational purposes only and is not a substitute for legal advice. LANDED provides these resources and templates to assist you, but you, the Deployer, are solely responsible for your own legal and compliance obligations under applicable laws. You should consult with qualified legal counsel to ensure your policies and practices are fully compliant.

Introduction & Purpose

This document is provided to you, the Deployer of the LANDED AI recruiting platform, to assist you in meeting your compliance obligations. It contains detailed information about our AI systems, their intended use, their performance and limitations, and the measures we take to mitigate algorithmic discrimination.

This information is specifically designed to support your internal Risk Management Program and your legally required Annual Impact Assessment.

AI System Overview

  • System Name: LANDED AI Recruiting Platform
  • System Components:
    • AI Conversational Assistant: Engages and screens applicants via SMS based on criteria you provide.
    • Intelligent AI Scheduler: Coordinates interviews for candidates who pass the initial screening.
  • Intended Use Cases:
    • To conduct initial, conversational screening of applicants based on objective, job-related criteria (e.g., availability, skills, work experience, required licenses).
    • To automate the scheduling of interviews to accelerate the hiring process (a simplified availability-matching sketch follows this list).
    • To provide consistent engagement with all applicants.
  • Known Harmful or Inappropriate Uses (Prohibited):
    • Using the system to make employment decisions based on any protected characteristic (e.g., race, sex, ethnicity, religion).
    • Configuring screening criteria that are not directly and demonstrably related to the essential functions of the job.
    • Using the system as the sole basis for a final hiring decision without any human review or judgment.
    • Failing to provide reasonable accommodations or alternative screening processes for applicants upon request.
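
As an illustration of how the scheduler's mutual-availability matching could work, here is a minimal sketch under simplifying assumptions (fixed 30-minute interviews, availability expressed as time windows). The find_mutual_slot function and the example windows are hypothetical, not LANDED's production scheduler.

```python
from datetime import datetime, timedelta

# Each party's availability as (start, end) windows; times are illustrative.
candidate = [(datetime(2025, 9, 8, 9), datetime(2025, 9, 8, 12)),
             (datetime(2025, 9, 9, 14), datetime(2025, 9, 9, 17))]
manager = [(datetime(2025, 9, 8, 11), datetime(2025, 9, 8, 15))]

def find_mutual_slot(a, b, minutes=30):
    """Return the first overlapping window long enough for an interview."""
    need = timedelta(minutes=minutes)
    for a_start, a_end in a:
        for b_start, b_end in b:
            start, end = max(a_start, b_start), min(a_end, b_end)
            if end - start >= need:
                return start, start + need
    return None  # no overlap found; the assistant would ask for more times

print(find_mutual_slot(candidate, manager))
# -> (datetime(2025, 9, 8, 11, 0), datetime(2025, 9, 8, 11, 30))
```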

Data Usage & Governance

  • Data Used to Train the System: Our models are trained on large volumes of anonymized and aggregated data, including:
    • Employer-provided job descriptions and screening criteria.
    • Anonymized candidate application data and conversational text content.
    • Publicly available data on job roles and skill requirements.
    • Note: We exclude sensitive demographic data from our training datasets. In addition, we implement proxy-risk controls: variables that could act as proxies for protected attributes are excluded from our training data (a simplified sketch of this exclusion follows this list).
  • Data Processed During Operation: The system processes the following data that you and the candidate provide:
    • Your configured screening questions and criteria for a specific role.
    • A candidate's direct responses to screening questions via SMS.
    • Candidate-provided data such as work history and availability.
    • Operational logs (e.g., response times and dates).
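
As a purely illustrative sketch of the exclusion described in the note above, the following shows a field-level blocklist applied before records enter a training set. The field names, the proxy examples, and the drop_excluded_fields helper are assumptions, not our actual pipeline.

```python
# Fields never allowed into training data: sensitive demographics and
# common potential proxies for them. Illustrative, not exhaustive.
EXCLUDED_FIELDS = {
    "race", "ethnicity", "sex", "religion",   # protected attributes
    "name", "zip_code", "date_of_birth",      # potential proxies (assumed examples)
}

def drop_excluded_fields(record: dict) -> dict:
    """Return a copy of a candidate record with excluded fields removed."""
    return {k: v for k, v in record.items() if k not in EXCLUDED_FIELDS}

raw = {"work_history": "3 years line cook", "zip_code": "80203",
       "availability": "weekends", "sex": "F"}
print(drop_excluded_fields(raw))
# -> {'work_history': '3 years line cook', 'availability': 'weekends'}
```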

Data Handling: Customer Data is stored in secure cloud environments within the United States. Please see our Terms of Service for full details on data processing and ownership of Aggregated Data.

System Performance, Limitations, and Bias Mitigation

This section provides critical information for your Impact Assessment.

  • Performance & Evaluation:
    • The system is evaluated based on its accuracy in applying the screening criteria you define.
    • Outputs are binary ("Meets Criteria" / "Does Not Meet Criteria") based on candidate responses; a hypothetical sketch of such a result record follows this list.
  • Known System Limitations:
    • The system can only evaluate information directly provided by the candidate; it cannot independently verify the accuracy of self-reported work history or skills.
    • The system is not designed to interpret nuanced, ambiguous, or sarcastic language outside the context of the screening questions.
    • The quality of the screening output is directly dependent on the quality and job-relatedness of the screening criteria you provide.
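
To make the binary output concrete, here is a minimal, hypothetical sketch of a screening result record, including the principal reason(s) that support human review and adverse-action notices. ScreeningResult and its fields are illustrative assumptions, not a documented LANDED schema.

```python
from dataclasses import dataclass, field

@dataclass
class ScreeningResult:
    """Illustrative record of one automated screening outcome."""
    candidate_id: str
    decision: str  # "Meets Criteria" or "Does Not Meet Criteria"
    principal_reasons: list[str] = field(default_factory=list)
    human_review_requested: bool = False  # set when the candidate asks for review

result = ScreeningResult(
    candidate_id="cand-001",  # placeholder identifier
    decision="Does Not Meet Criteria",
    principal_reasons=["Not available on weekends (required for this role)"],
)
print(result.decision, result.principal_reasons)
```

Keeping the principal reason(s) alongside the decision is what makes the Adverse Action Notice described below straightforward to populate.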

LANDED Toolkit for Deployer Compliance

To support your compliance efforts, we provide the following resources in our customer portal:

  • Template: Website & Job Description Notice: A template statement for your careers page and job descriptions disclosing the use of an AI system.
  • Template: Adverse Action Notice: A template for notifying a candidate of the automated decision, explaining the principal reason(s), and providing instructions for requesting human review.

  • Best Practices Guide: A guide for crafting fair, objective, and job-related screening criteria.