Responsible AI Development
Last Updated: January 16, 2026
Our Commitment
Lantern Pharma is committed to developing AI systems that are safe, transparent, and beneficial for rare cancer research. This policy outlines our principles, practices, and safeguards for responsible AI development.
withZeta is designed to accelerate drug discovery for rare cancers—diseases that affect small patient populations and receive minimal pharmaceutical investment. We recognize the profound responsibility that comes with deploying AI in medical research contexts.
1. Core Principles
1.1 Beneficence
- AI should accelerate rare cancer research and therapeutic development
- Outputs should serve underserved patient populations
- Focus on scientific discovery, not profit maximization
- Democratize access to rare disease expertise
1.2 Non-Maleficence (Do No Harm)
- Prevent misuse, such as patient diagnosis without professional review
- Avoid generating harmful or dangerous medical advice
- Monitor for unintended consequences
- Implement safeguards against abuse
See also: Acceptable Use Policy
1.3 Transparency
- Disclose when content is AI-generated
- Cite data sources (PubMed, ORPHANET, NCI, etc.)
- Explain limitations and uncertainties
- Document AI architecture and decision-making processes
1.4 Fairness & Equity
- Equal access during beta (no patient stratification)
- Avoid bias in rare cancer coverage
- Serve cancers regardless of commercial potential
- No discrimination based on disease rarity or market size
1.5 Accountability
- Human oversight of AI development
- Safety reporting mechanism
- Continuous monitoring and improvement
- Clear lines of responsibility
2. Development Practices
2.1 Model Selection & Transparency
- Multi-Model Approach: We use multiple AI models (both open-source and commercial) and may change models as we improve our Services
- Current Models: Anthropic Claude and models from other providers, subject to change without notice
- Selection Criteria: Medical accuracy, hallucination rates, safety alignment, API reliability, citation accuracy, refusal behavior
- Safety Focus: Prefer models with constitutional AI training, strong safety guardrails, and explicit uncertainty acknowledgment
- Ongoing Evaluation: Continuously assess model performance and switch to better models as they become available
- No Single Model Dependence: To ensure service resilience and continuous improvement, we do not rely on any one AI provider
2.2 Prompt Engineering
- System prompts emphasize uncertainty acknowledgment
- Persona-specific prompts adapt depth to question complexity
- Explicit medical disclaimer in system instructions
- Citation requirements embedded in prompts (see the illustrative sketch after this list)
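For illustration only, the sketch below shows one way a system prompt could combine these elements; the wording, the PERSONA_DEPTH map, and the build_system_prompt helper are assumptions made for this example, not Zeta's actual prompts.

```python
# Hypothetical sketch of assembling a persona-aware system prompt.
# The rule text and helper names are illustrative placeholders.

MEDICAL_DISCLAIMER = (
    "You are a research assistant. You do not diagnose patients or recommend "
    "treatments; all outputs are hypotheses for professional review."
)
CITATION_RULE = (
    "Cite every factual claim drawn from tool results (PMID, DOI, or NCT number)."
)
UNCERTAINTY_RULE = (
    "Use hedged language ('may', 'suggests', 'preliminary') and state when "
    "evidence is limited or conflicting."
)
PERSONA_DEPTH = {"clinician": "concise", "researcher": "exhaustive"}

def build_system_prompt(persona: str) -> str:
    """Combine the disclaimer, citation, and uncertainty rules for one persona."""
    depth = PERSONA_DEPTH.get(persona, "balanced")
    return "\n".join([
        MEDICAL_DISCLAIMER,
        CITATION_RULE,
        UNCERTAINTY_RULE,
        f"Adapt response depth to question complexity; default style: {depth}.",
    ])
```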
2.3 Guardrails & Safety Mechanisms
Pre-Generation Safeguards (sketch below):
- Input validation (rejects PHI and direct patient queries)
- Rate limiting to prevent abuse (per-user quotas)
- User authentication and access control (AWS Cognito)
- Query pattern analysis for malicious intent detection
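A minimal sketch of such pre-generation checks appears below; the PHI patterns, the hourly quota, and the helper names are placeholders chosen for this example, not Zeta's production rules.

```python
# Illustrative pre-generation safeguards: screen inputs for PHI-like patterns
# and enforce a simple per-user hourly quota. All patterns and limits are
# placeholder values.
import re
import time
from collections import defaultdict

PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # SSN-like identifier
    re.compile(r"\bMRN[:\s]*\d+\b", re.I),    # medical record number
]
REQUESTS_PER_HOUR = 100                        # placeholder quota
_request_log: dict[str, list[float]] = defaultdict(list)

def validate_input(query: str) -> None:
    """Reject queries that appear to contain protected health information."""
    for pattern in PHI_PATTERNS:
        if pattern.search(query):
            raise ValueError("Query appears to contain PHI and was rejected.")

def check_rate_limit(user_id: str) -> None:
    """Enforce a sliding one-hour per-user request quota."""
    now = time.time()
    recent = [t for t in _request_log[user_id] if now - t < 3600]
    if len(recent) >= REQUESTS_PER_HOUR:
        raise RuntimeError("Rate limit exceeded; please retry later.")
    recent.append(now)
    _request_log[user_id] = recent
```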
Post-Generation Safeguards (sketch below):
- Citation requirement enforcement (tool results must be cited)
- Uncertainty language enforcement ("may", "suggests", "preliminary")
- Medical disclaimer in PDF exports
- Fact-checking against source material
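As a sketch of how such checks might look (the regular expression and hedge-term list are assumptions, not Zeta's actual rules):

```python
# Illustrative post-generation checks: flag responses that cite no sources
# after tool use, or that contain no hedged language.
import re

CITATION_RE = re.compile(r"(PMID[:\s]*\d+|doi\.org/\S+|NCT\d{8})", re.I)
HEDGE_TERMS = ("may", "suggests", "preliminary", "appears to")

def needs_review(response: str, used_tools: bool) -> bool:
    """Return True when a response should be revised or flagged for review."""
    missing_citation = used_tools and not CITATION_RE.search(response)
    missing_hedging = not any(term in response.lower() for term in HEDGE_TERMS)
    return missing_citation or missing_hedging
```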
Recursion Safety (sketch below):
- Maximum depth limits (3-10 rounds depending on persona)
- Context window monitoring (force synthesis at 70%+ usage)
- Tool timeout limits (prevent infinite loops)
- Early termination if harmful patterns detected
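The sketch below illustrates how these recursion limits could fit together in an agentic tool-use loop; the round limits, thresholds, and injected helpers (call_model, run_tool, count_tokens) are assumptions for this example, not Zeta's implementation.

```python
# Illustrative recursion guardrails: a hard per-persona round limit, forced
# synthesis when context usage passes a threshold, and per-tool timeouts.
# All limits and helper names are placeholders.

MAX_ROUNDS = {"clinician": 3, "researcher": 10}   # persona-dependent limit
CONTEXT_LIMIT_TOKENS = 200_000
SYNTHESIS_THRESHOLD = 0.70                         # force an answer at 70% usage
TOOL_TIMEOUT_SECONDS = 30

def run_agent(question, persona, call_model, run_tool, count_tokens):
    """Loop tool calls until an answer, the depth limit, or context pressure ends it."""
    history = [question]
    for _ in range(MAX_ROUNDS.get(persona, 5)):
        usage = count_tokens(history) / CONTEXT_LIMIT_TOKENS
        force_synthesis = usage >= SYNTHESIS_THRESHOLD
        step = call_model(history, force_synthesis=force_synthesis)
        if step.is_final or force_synthesis:
            return step.text
        result = run_tool(step.tool_call, timeout=TOOL_TIMEOUT_SECONDS)
        history.append(result)
    # Depth limit reached: ask the model to synthesize from what it already has.
    return call_model(history, force_synthesis=True).text
```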
2.4 Human Review
- Safety-critical outputs flagged for manual review
- Monthly audit of conversation logs (random sampling)
- User feedback integration into model improvements
- Expert review of medical accuracy claims
3. Data Practices
3.1 Training Data & User Control
- No Custom Model Training: Zeta does NOT train custom foundation models on user data
- Third-Party Models: AI models from providers (e.g., Anthropic Claude) are pre-trained on public datasets (we do NOT control their training data)
- Commercial API Protection: Commercial API providers do NOT train models on Zeta user data (per commercial API terms)
- Our Role: Tool integration, prompt engineering, and service orchestration (not foundation model training)
- User Opt-Out: Users can opt out of having their conversations used to improve Zeta's Services (system prompts, tool selection, etc.) in account settings
- Opt-Out Exceptions: Even when a user has opted out, data may still be used for safety reviews, explicit user feedback, and legal compliance
3.2 User Data Privacy
- Conversation history stored for retrieval and user experience (not sent to third-party AI providers for training)
- No user data sold to third parties
- Anonymized usage analytics (opt-in only)
- Users control whether their conversations are used for service improvement via account settings (see Section 3.1)
- See Privacy Policy for complete details on data handling and your rights
3.3 Third-Party Data Attribution
All external data sources are clearly attributed (formatting sketch below):
- PubMed citations include PMID and DOI
- ORPHANET disease codes cited
- NCI Thesaurus concept IDs linked
- Clinical trials include NCT numbers
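For illustration, the sketch below shows one way these identifiers might be rendered in citations; the Citation structure and format_citation helper are hypothetical, not part of Zeta's codebase.

```python
# Illustrative citation formatting for the identifier types listed above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Citation:
    source: str               # "pubmed", "orphanet", "nci", or "clinicaltrials"
    identifier: str           # PMID, ORPHA code, NCI concept ID, or NCT number
    doi: Optional[str] = None

def format_citation(c: Citation) -> str:
    """Render a citation string with the source-appropriate identifier."""
    if c.source == "pubmed":
        doi_part = f", doi:{c.doi}" if c.doi else ""
        return f"PMID: {c.identifier}{doi_part}"
    if c.source == "orphanet":
        return f"ORPHA:{c.identifier}"
    if c.source == "nci":
        return f"NCI Thesaurus: {c.identifier}"
    return f"ClinicalTrials.gov: {c.identifier}"

# Example: format_citation(Citation("clinicaltrials", "NCT01234567"))
# -> "ClinicalTrials.gov: NCT01234567"
```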
4. Bias Mitigation
4.1 Rare Cancer Coverage
- 438 rare cancer types in knowledge base
- No commercial bias (all cancers regardless of market size)
- Continuous expansion based on research need, not profit potential
- Equal treatment of ultra-rare vs. less-rare cancers
4.2 Geographic Bias
- Clinical trials database includes global trials (not US-only)
- Literature search not limited to English (PubMed multilingual)
- ORPHANET provides European rare disease coverage
4.3 LLM Bias
- Anthropic Claude trained with Constitutional AI (bias mitigation built-in)
- Monitored for demographic, gender, or racial bias
- User reporting encouraged (see Contact section)
- Quarterly bias audits on sample conversations
5. Limitations & Transparency
5.1 What Zeta Can Do
- ✓ Synthesize medical literature across databases
- ✓ Identify relevant clinical trials
- ✓ Suggest drug repurposing hypotheses
- ✓ Calculate molecular properties (cheminformatics)
- ✓ Generate knowledge graphs of relationships
5.2 What Zeta Cannot Do
- ✗ Diagnose patients or prescribe treatments
- ✗ Replace clinical trials or experimental validation
- ✗ Guarantee research success or therapeutic efficacy
- ✗ Access real-time unpublished proprietary data
- ✗ Provide FDA-approved clinical decision support
5.3 Known Limitations
- Hallucination risk (LLM-generated errors and confabulations)
- Data staleness (quarterly clinical trials updates, not real-time)
- Context window constraints (200K tokens ≈ 150K words max)
- No access to proprietary drug development data
- Dependency on third-party API availability
6. Safety Monitoring
6.1 Incident Reporting
Users can report:
- Factual errors in AI responses
- Dangerous or harmful outputs
- Bias or unfair treatment
- Privacy concerns or data leaks
Email: contact@withzeta.ai
Response Time: within 48 hours for all safety reports
6.2 Continuous Evaluation
- Monthly conversation audits (sample-based, 100 conversations/month)
- Quarterly accuracy benchmarking against gold-standard datasets
- User satisfaction surveys (NPS tracking)
- External security audits (annual penetration testing)
7. Compliance
7.1 Regulatory Framework
- NOT an FDA-regulated medical device (research tool)
- GDPR-compliant data practices (EU users)
- CCPA-compliant (California users)
- SOC 2 Type II audit planned (2026)
7.2 Ethical Oversight
- Internal AI Ethics Committee (quarterly meetings)
- External advisory board (planned for 2026)
- Alignment with NIH AI research guidelines
8. Environmental Impact
8.1 Carbon Footprint
- AWS carbon-neutral hosting (us-east-2 region, Ohio)
- Serverless architecture (efficient resource use, auto-scaling)
- Model inference via Anthropic (external carbon offset program)
8.2 Sustainability Practices
- Minimize unnecessary LLM calls through caching and deduplication (see the sketch after this list)
- Efficient database queries (indexed searches)
- Right-sized compute resources (no over-provisioning)
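As a sketch of the caching idea (the hashing scheme and in-memory cache are assumptions for illustration, not Zeta's implementation):

```python
# Illustrative response cache: identical prompts reuse a stored completion
# instead of triggering another model call.
import hashlib

_response_cache: dict[str, str] = {}

def cached_completion(prompt: str, call_model) -> str:
    """Return a cached response for a repeated prompt rather than re-invoking the model."""
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key not in _response_cache:
        _response_cache[key] = call_model(prompt)
    return _response_cache[key]
```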
9. Future Commitments
9.1 Roadmap
- Red-teaming exercises (Q2 2026) - adversarial testing by security researchers
- Adversarial testing framework (Q3 2026) - automated attack detection
- Public AI safety report (annual) - transparency on incidents and improvements
- External ethics board (2026) - independent oversight
9.2 Community Engagement
- Open-source contribution (selected tools on GitHub)
- Academic partnerships (collaborations with universities)
- Conference presentations on AI safety in medical research
10. Contact
AI Safety Team
Email: contact@withzeta.ai
Address: Lantern Pharma Inc., Dallas, Texas, United States
Responsible AI Officer: Reed Bender (reed@lanternpharma.com)
Last Updated: January 16, 2026
This policy is reviewed quarterly and updated as practices evolve.