Responsible AI Policy
Effective Date: January 1, 2025 · Last Updated: January 1, 2025
1. Our Commitment
BuiltByBas uses artificial intelligence as a tool to enhance our services, never as a replacement for human judgment, creativity, or accountability. We believe AI should be transparent, fair, and always under human control.
Our core principle: AI assists. It does not decide. Every AI-generated output is reviewed and approved by a human before it reaches any client.
2. How We Use AI
We use AI (specifically, Anthropic's Claude) to assist with the following:
| Use Case | What AI Does | Human Oversight |
|---|---|---|
| Proposal drafting | Generates initial scope, timeline, and cost estimates from your intake form | Bas reviews and edits every proposal before delivery |
| Project estimates | Suggests timeline and budget ranges based on project complexity | All estimates reviewed and adjusted before presenting |
| Content generation | Creates marketing copy and content drafts for client projects | All content reviewed and edited before client delivery |
| Intake scoring | Evaluates project readiness and scope clarity using objective criteria | Rule-based algorithm (no external AI); all scores reviewed |
3. Human Review Gates
No AI-generated content reaches a client without human review. This is a non-negotiable standard:
- Every proposal is read, edited, and approved by Bas Rosario before delivery
- Every estimate is validated against real-world project experience
- Every piece of generated content is reviewed for accuracy, tone, and fairness
- The human reviewer has full authority to modify, reject, or override any AI output
- The human reviewer, not the AI, is accountable for the final product
4. Your Data and AI
What we send to AI
- Your intake form responses (business info, project requirements, goals)
- Project type, scope, and service category
- Non-sensitive business context needed for generation
What we NEVER send to AI
- Passwords or authentication credentials
- Payment information (credit cards, bank details)
- Social Security numbers or government-issued IDs
- Personal health information
- Any data you have explicitly marked as confidential
AI provider data handling
Our AI provider (Anthropic) operates under a zero-retention API policy: your data is not stored by Anthropic and is not used to train their models. Data is processed in real time and discarded after the response is generated.
For full details on data collection and handling, see our Privacy Policy.
5. Fairness and Bias Prevention
We design our systems to be fair and unbiased:
- Project scoring and prioritization are based on objective criteria: readiness, scope clarity, engagement level, and feasibility
- Explicitly excluded from scoring: client name, email, industry, company size, budget amount, location, gender, age, race, ethnicity, or any protected characteristic
- All AI-generated proposals are reviewed for neutral, professional language before delivery
- We screen all project requests against ethical criteria and decline projects involving surveillance without consent, deceptive practices, discrimination, dark patterns, or exploitation
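To illustrate how a rule-based scorer can enforce these constraints by construction, here is a minimal sketch. The field names, equal weights, and 0–10 scale are hypothetical, not our production code; the point is that the input type contains only the four objective criteria, so the score cannot depend on any excluded attribute.

```python
from dataclasses import dataclass

# Hypothetical structure: the policy names the criteria (readiness, scope
# clarity, engagement level, feasibility) but not the exact weights or scale.
@dataclass(frozen=True)
class IntakeScore:
    readiness: int      # 0-10: how ready the client is to start
    scope_clarity: int  # 0-10: how clearly the project scope is defined
    engagement: int     # 0-10: engagement level during intake
    feasibility: int    # 0-10: technical and timeline feasibility
    # Deliberately absent: name, email, industry, company size, budget,
    # location, or any protected characteristic. Because they are not
    # part of the input type, the score cannot depend on them.

def score_intake(s: IntakeScore) -> float:
    """Rule-based score on a 0-100 scale (illustrative equal weights)."""
    criteria = (s.readiness, s.scope_clarity, s.engagement, s.feasibility)
    if not all(0 <= c <= 10 for c in criteria):
        raise ValueError("each criterion must be in the range 0-10")
    return sum(criteria) / (10 * len(criteria)) * 100

print(score_intake(IntakeScore(readiness=8, scope_clarity=6,
                               engagement=9, feasibility=7)))  # 75.0
```

Every score produced this way is fully explainable: the breakdown a client can request is simply the four criterion values and the weighting applied to them.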
6. Transparency
We believe in honesty about AI usage:
- If you ask whether AI was involved in any part of your project, we will always give you an honest answer
- Every proposal includes a line confirming it was reviewed and approved by Bas Rosario
- Our scoring criteria are documented and explainable; you can request a breakdown of how your submission was assessed
- This policy is publicly available and referenced in our proposals and communications
7. Your Rights
As a client or prospective client, you have the right to:
- Request that AI not be used in any part of your specific project (we will accommodate this wherever feasible)
- Request a human-only review of any AI-generated content related to your project
- Request an explanation of any AI-assisted assessment or scoring of your submission
- Request deletion of any data sent to AI providers on your behalf (per Anthropic's zero-retention policy, this data is not stored, but we will confirm)
8. Incident Response
If AI produces harmful, incorrect, or biased output:
- Our human review gates exist to catch problems before they reach clients
- If problematic content is caught in review, it is corrected or regenerated, never sent
- If an issue reaches a client, we will contact you directly within 72 hours, correct the record, and take steps to prevent recurrence
- All incidents are logged and reviewed to improve our processes
9. Compliance
This policy is designed to comply with:
- California Consumer Privacy Act (CCPA) and California Privacy Rights Act (CPRA)
- EU General Data Protection Regulation (GDPR)
- EU AI Act (2024)
- OECD AI Principles and UNESCO Recommendation on AI Ethics
All current BuiltByBas AI use cases fall within the “limited risk” category under the EU AI Act. No high-risk AI systems are deployed.
10. Policy Review
This policy is reviewed:
- Quarterly as part of standard governance
- Immediately when a new AI feature is added or an existing feature changes
- Upon request from any client or regulatory authority
- After any incident involving AI-generated output
11. Contact
For questions about our AI practices or to exercise your rights under this policy:
BuiltByBas
Email: bas@builtbybas.com
Website: builtbybas.com
Location: California, United States