Case Study

From 3 Hours of Paperwork to 20 Minutes of AI

How Pacific Valley Health Network deployed IntelligenceAmplifier.AI to transform clinical documentation, accelerate compliance, and reclaim physician time — entirely within a HIPAA-compliant, on-premise infrastructure.

68%
Reduction in documentation time
18 min
Full compliance audit prep via AI
$1.8M
Projected annual savings
14 wks
Kickoff to full production
01
Executive Summary

The Situation at a Glance

Pacific Valley Health Network (PVHN) is a regional healthcare organization comprising four acute-care hospitals, eleven outpatient clinics, and a 1,400-member workforce spanning physicians, nurses, administrative staff, and compliance officers. Despite investing heavily in electronic health record (EHR) systems and operational technology over the prior decade, the organization faced a deepening crisis of administrative burden.

Clinical staff were spending an estimated 3.2 hours per day on documentation, prior authorization, and policy retrieval — time that came directly at the expense of patient care. Compliance officers required an average of 2.8 weeks to prepare materials for a single regulatory audit. New staff onboarding averaged 11 weeks before employees felt confident navigating internal policies and clinical protocols.

In early 2024, PVHN engaged arvintech to deploy IntelligenceAmplifier.AI — a private, on-premise AI assistant trained exclusively on PVHN's internal documents, protocols, and operational knowledge. The deployment took 14 weeks from kickoff to full production rollout. Within 90 days of launch, PVHN had reduced clinical documentation time by 68%, cut compliance preparation from weeks to hours, and projected $1.8 million in annual operational savings.

This case study documents the full technical architecture, AI preparation methodology, deployment workflow, and measured outcomes of that engagement.


02
The Challenge

Four Interconnected Problems

PVHN's leadership had identified four distinct but interconnected operational problems, all rooted in the same underlying issue: the organization's institutional knowledge was trapped inside documents that humans had to read, search, and manually synthesize.

3.2 hrs/day

Physician Documentation Burden

Each physician spent an average of 3.2 hours per day on documentation, prior authorization, and policy lookup — time directly subtracted from patient care hours.

2.8 weeks

Compliance Preparation Time

Preparing materials for a single regulatory audit required 2.8 weeks of work across three compliance officers, with high risk of missed documentation gaps.

11 weeks

New Staff Onboarding Duration

New clinical and administrative staff averaged 11 weeks before independently navigating PVHN's extensive policy and protocol library with confidence.

$2.4M/yr

Administrative Overhead Cost

The cumulative cost of manual document retrieval, redundant administrative tasks, and rework from prior authorization denials reached $2.4 million annually.

The root cause was not a lack of documentation — PVHN had meticulously maintained clinical protocols, compliance manuals, administrative policies, credentialing documents, and patient communication templates. The problem was access and synthesis. Staff could not quickly retrieve the right information, cross-reference it with context, or draft actionable outputs without spending significant manual effort.

Traditional search tools returned document links, not answers. The EHR system contained patient data but no organizational intelligence. An internal wiki had been attempted but abandoned due to poor adoption. What PVHN needed was not more documents — it needed an AI system that could read, understand, and reason across all of them simultaneously.


03
Solution Overview

A Private AI Brain for the Entire Organization

arvintech proposed and deployed IntelligenceAmplifier.AI as a closed-loop, private AI deployment. The critical design principle: the system would be trained exclusively on PVHN's own documents and would run entirely within PVHN's infrastructure. No patient data, clinical records, or proprietary documents would ever leave PVHN's network or be transmitted to external AI providers.

The architecture centered on Retrieval-Augmented Generation (RAG) — a design pattern in which a large language model is paired with a private vector database containing embeddings of the organization's documents. Rather than relying on the AI's general training, every response is grounded in PVHN's actual policies, protocols, and knowledge base. The AI doesn't guess — it retrieves and synthesizes.
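The retrieve-then-generate loop described above can be sketched in a few lines. This is a toy illustration only: bag-of-words vectors and cosine similarity stand in for the BGE-M3 embeddings and Weaviate retrieval used in the real deployment, and all sample chunks and names are invented.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank chunks by similarity to the query; top-k go into the prompt.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    # Grounding: the model is instructed to answer only from the
    # retrieved context, never from its general training.
    ctx = "\n".join(f"[{i + 1}] {c}" for i, c in enumerate(context))
    return f"Answer ONLY from the context below.\n{ctx}\n\nQuestion: {query}"

chunks = [
    "Discharge summaries must be signed within 24 hours of discharge.",
    "Metformin dosing requires an eGFR check before any dose increase.",
    "Visitor parking validation is handled at the front desk.",
]
top = retrieve("when must a discharge summary be signed", chunks)
prompt = build_prompt("When must a discharge summary be signed?", top)
```

The numbered context markers are what make the inline citations in the UI possible: each `[n]` maps back to a source chunk and its document metadata.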

Four primary use cases were scoped for the initial deployment:

  1. Clinical Documentation Assistant — AI-drafted clinical notes, discharge summaries, and care plans based on physician prompts
  2. Compliance & Policy Q&A — Instant answers to regulatory and policy questions, with source citations
  3. Prior Authorization Drafting — Automated drafting of insurance pre-authorization letters using clinical context
  4. Staff Onboarding Knowledge Base — An AI guide through PVHN procedures, protocols, and orientation materials

A fifth use case — AI-assisted patient communication drafting — was added during the second sprint after nursing staff identified it as a high-value opportunity.


04
Tech Stack

The Complete Technical Architecture

The IntelligenceAmplifier.AI deployment for PVHN is a multi-layer system. Each layer was selected for healthcare-grade security, performance at scale, and seamless integration with existing PVHN infrastructure.

1. AI & Language Model Layer

LLM Engine
Private LLaMA 3.1 70B (quantized, on-premise)
Open-weight model enables full on-premise deployment; no data leaves PVHN infrastructure. Quantized to 4-bit for GPU efficiency without meaningful quality loss.
Embedding Model
BGE-M3 (BAAI) — multilingual dense retrieval
State-of-the-art embedding quality for medical terminology. Handles mixed-language clinical documents and abbreviation-dense protocol texts.
RAG Framework
LangChain + custom retrieval pipeline
Provides structured query decomposition, multi-hop retrieval for complex compliance questions, and citation tracking back to source documents.
Inference Server
vLLM with PagedAttention
Enables concurrent handling of 40+ simultaneous queries with sub-3-second response times — critical for morning rounds peak load.

2. Vector Database & Retrieval

Vector Store
Weaviate (self-hosted, PVHN datacenter)
Horizontally scalable, supports hybrid keyword + semantic search, and stores metadata (document type, department, last-updated) alongside embeddings for filtered retrieval.
Chunking Strategy
Semantic chunking — 512 tokens, 64-token overlap
Medical documents require context-preserving chunks. Overlap prevents clinical instructions from being split at section boundaries.
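The window arithmetic behind the 512-token / 64-token-overlap strategy can be sketched as a simple sliding window over token indices. The production pipeline additionally aligns boundaries to section headings ("semantic" chunking), which this sketch omits.

```python
def chunk_tokens(tokens: list[str], size: int = 512,
                 overlap: int = 64) -> list[list[str]]:
    # Each chunk shares `overlap` tokens with its predecessor, so an
    # instruction falling at a boundary appears intact in at least
    # one chunk.
    if size <= overlap:
        raise ValueError("chunk size must exceed overlap")
    step = size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break  # final window already covers the tail
    return chunks

tokens = [f"t{i}" for i in range(1000)]
chunks = chunk_tokens(tokens)
```

With 1,000 tokens this yields three chunks, the last 64 tokens of each window repeating as the first 64 of the next.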
Reranking
Cohere Rerank (self-hosted) — cross-encoder
Reranks top-20 retrieved chunks to top-5 before LLM context injection, significantly improving answer precision on policy documents.

3. Document Ingestion Pipeline

PDF Extraction
Apache Tika + custom OCR (Tesseract 5)
PVHN's document library includes scanned forms, signed PDFs, and structured EHR exports. Multi-engine extraction handles all formats.
Document Classification
Fine-tuned DistilBERT classifier (12 categories)
Automatically tags documents as policy, protocol, compliance, administrative, clinical, onboarding, etc. — enabling role-based retrieval filtering.
Update Pipeline
Apache Airflow — nightly differential sync
Monitors source document repositories for changes, re-ingests updated documents, and invalidates stale vector embeddings within 24 hours of source update.
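A minimal sketch of the differential-sync logic: compare a stored content hash per document against the current source, re-ingest what changed, invalidate what was deleted. The real pipeline is an Airflow DAG; the function and document names below are illustrative.

```python
import hashlib

def content_hash(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def diff_sync(source: dict[str, str],
              indexed_hashes: dict[str, str]) -> tuple[list[str], list[str]]:
    """Return (documents to re-ingest, embeddings to invalidate)."""
    to_reingest, to_invalidate = [], []
    for doc_id, text in source.items():
        if indexed_hashes.get(doc_id) != content_hash(text):
            to_reingest.append(doc_id)      # new or updated document
    for doc_id in indexed_hashes:
        if doc_id not in source:
            to_invalidate.append(doc_id)    # retired at the source
    return to_reingest, to_invalidate

source = {"policy-001": "v2 text", "policy-002": "unchanged"}
indexed = {
    "policy-001": content_hash("v1 text"),
    "policy-002": content_hash("unchanged"),
    "policy-009": content_hash("retired"),
}
reingest, invalidate = diff_sync(source, indexed)
```

Hashing content rather than trusting modification timestamps avoids re-embedding documents that were merely re-saved without changes.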
PII / PHI Scrubbing
Microsoft Presidio (on-premise)
Scans all documents before ingestion to identify and mask PHI fields. Training data contains zero patient-identifiable information.
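For illustration, a drastically simplified stand-in for the scrubbing step. The actual deployment uses Microsoft Presidio's recognizers plus manual Privacy Officer review; the two regex patterns below only demonstrate the detect-and-mask pattern, not production-grade PHI detection.

```python
import re

# Simplified stand-ins for two common identifier formats. Real PHI
# detection covers names, dates, MRNs, addresses, and free-text
# context that regexes alone cannot catch.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def scrub(text: str) -> tuple[str, list[str]]:
    """Mask detected identifiers and report which types were found."""
    found = []
    for label, pattern in PHI_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"<{label}>", text)
    return text, found

masked, hits = scrub("Patient callback 555-867-5309, SSN 123-45-6789.")
```

Documents whose scrub pass reports any hit are routed to human review before ingestion, which mirrors the Privacy Officer workflow described above.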

4. Integration & Infrastructure

EHR Integration
HL7 FHIR R4 API bridge (Epic)
Allows the AI interface to pre-populate clinical context (patient age, diagnosis codes, current medications) into documentation prompts — without storing PHI in the AI layer.
Authentication
SAML 2.0 via Azure AD (PVHN SSO)
Staff access IntelligenceAmplifier.AI using their existing PVHN credentials. No separate login. RBAC enforced at the SSO layer.
Compute (Primary)
2× NVIDIA A100 80GB SXM4 (on-premise)
Dedicated GPU cluster provisioned by arvintech for LLM inference. Located in PVHN's existing secure datacenter under PVHN physical control.
Compute (Failover)
Azure Government Cloud (HIPAA BAA)
Private cloud failover for high-availability SLA. Azure Government provides HIPAA-eligible infrastructure with a signed Business Associate Agreement.
UI Layer
Next.js 14 — embedded in PVHN intranet
Deployed as a widget accessible within PVHN's existing intranet portal. Zero new login screens for staff. Mobile-responsive for nursing station tablets.

A key architectural decision was the choice of a hybrid deployment model. The vector database and embedding pipeline run on PVHN's private servers, ensuring that no document content is ever externalized. The LLM inference layer runs on a dedicated on-premise GPU cluster provisioned by arvintech, with a private cloud failover for high-availability during peak load periods such as morning rounds and end-of-shift documentation.

All inter-service communication is encrypted with TLS 1.3. The system operates within PVHN's existing network security perimeter, authenticated via SAML 2.0 single sign-on integrated with PVHN's Active Directory. Role-based access control (RBAC) ensures that physicians, nurses, compliance officers, and administrators each see only the document domains relevant to their role.


05
AI Preparation

Eight Weeks of Groundwork Before a Single Query

The most common mistake organizations make when deploying AI is underinvesting in data preparation. AI does not magically extract value from messy, poorly organized documents. The quality of the AI's responses is directly proportional to the quality of its knowledge base. For PVHN, arvintech ran a structured eight-week preparation phase before any AI model was trained or tested.

1
Week 1–2

Document Audit & Inventory

arvintech conducted a full audit of PVHN's document repositories: SharePoint, shared drives, the EHR document library, and department-specific folders. The audit identified 14,847 documents across 23 document types.

  • 14,847 total documents inventoried across 6 source systems
  • 23 distinct document categories identified and mapped to user roles
  • 3,412 documents flagged as outdated (last modified >3 years ago) and excluded
  • 847 documents identified as compliance-critical and prioritized for first ingestion
2
Week 3

Data Quality Assessment

Each document was evaluated on four quality dimensions: extractability (can text be reliably extracted?), completeness (are documents missing sections?), accuracy (is content current and approved?), and clarity (is language precise enough for AI retrieval?).

  • 23% of documents required remediation before ingestion
  • 8% were scanned images requiring OCR processing
  • 11% had inconsistent section formatting requiring normalization
  • 4% contained embedded tables that required extraction restructuring
3
Week 4

PHI Scrubbing & HIPAA Review

Before any document entered the AI pipeline, every file was processed through Microsoft Presidio for automated PHI detection, followed by manual review of flagged items by PVHN's Privacy Officer. Documents containing clinical case examples with patient details were either de-identified or replaced with synthetic examples.

  • 2,341 documents processed through automated PHI detection
  • 184 documents contained PHI — all de-identified or synthetic replacements created
  • Privacy Officer sign-off obtained on full training corpus
  • HIPAA Technical Safeguard review completed for AI infrastructure
4
Week 5–6

Document Processing & Embedding

Approved documents were processed through the ingestion pipeline: text extraction, semantic chunking, embedding generation, and vector database indexing. The embedding process generated 187,430 vector embeddings from the final document corpus.

  • 11,436 documents approved for ingestion after audit and scrubbing
  • 187,430 vector embeddings generated across all document chunks
  • Average document processing time: 4.2 seconds per document
  • Total ingestion pipeline runtime: 13.4 hours (overnight batch)
5
Week 7

Retrieval Quality Testing

A set of 240 test queries was developed by PVHN's clinical leads, compliance team, and department managers — representing real questions staff would ask the system. Each query was evaluated for retrieval precision (did the right documents surface?) and answer quality (was the AI response accurate and actionable?).

  • 240 test queries across 4 use case domains
  • Initial retrieval precision: 73% (target: 90%+)
  • Identified 3 document categories with poor chunk boundaries — re-chunked
  • Identified 2 terminology gaps — added PVHN-specific medical abbreviation glossary
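The precision figure reported here can be computed as mean precision@k: for each test query, the fraction of the top-k retrieved chunks that reviewers marked relevant, averaged across all queries. A sketch with invented data:

```python
def precision_at_k(retrieved: list[str], relevant: set[str],
                   k: int = 5) -> float:
    # Fraction of the top-k retrieved chunks judged relevant.
    top = retrieved[:k]
    return sum(1 for doc in top if doc in relevant) / len(top)

def mean_precision(results: list[tuple[list[str], set[str]]],
                   k: int = 5) -> float:
    # Average precision@k across the full test-query set.
    return sum(precision_at_k(r, rel, k) for r, rel in results) / len(results)

# Illustrative data: each tuple is (ranked retrieval, reviewer-marked
# relevant set) for one test query.
results = [
    (["d1", "d2", "d3", "d4", "d5"], {"d1", "d2", "d3", "d4"}),
    (["d9", "d1", "d2", "d3", "d4"], {"d1", "d2", "d3", "d4", "d5"}),
]
score = mean_precision(results)
```

Running the same 240-query set before and after tuning is what makes the 73% → 93.4% improvement a like-for-like comparison.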
6
Week 8

Tuning, Prompt Engineering & Re-testing

Based on test results, arvintech refined the retrieval pipeline (adjusting chunk overlap, adding metadata filtering), optimized system prompts for each use case domain, and re-tested the full query set. Final retrieval precision reached 93.4% before production go-live.

  • Retrieval precision improved from 73% to 93.4% through pipeline tuning
  • Use-case-specific system prompts written and tested for 5 workflow types
  • Response latency optimized: P95 latency reduced from 8.2s to 2.7s
  • Clinical lead sign-off obtained on answer quality across all test domains

The 80/20 of AI preparation: In our experience across deployments, 80% of poor AI performance traces back to document quality issues, not model limitations. For PVHN, 23% of ingested documents required remediation before they could be used as training data. Identifying and fixing these issues before deployment — not after — is what separates an AI that frustrates users from one they trust.


06
AI Workflow

How Every Query Flows Through the System

Understanding the AI workflow is critical to understanding why IntelligenceAmplifier.AI produces reliable, citation-backed answers rather than hallucinated responses. The system uses a Retrieval-Augmented Generation (RAG) pipeline with a six-stage processing flow for every user query.

1

Query Intake & Role Verification

Staff member submits a query through the IntelligenceAmplifier.AI interface embedded in the PVHN intranet. The system immediately verifies the user's role via the SAML token — a physician sees clinical domains, a compliance officer sees regulatory domains, an administrator sees operational domains.

Session JWT issued at SAML sign-on decoded → role extracted → document domain filter applied → query passed to retrieval pipeline
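A minimal sketch of the role-to-domain gate applied at this stage. The mapping below is illustrative, not PVHN's actual policy; in production the allowed domains become a metadata filter on the vector store query, so out-of-domain chunks are never even candidates for retrieval.

```python
# Illustrative role-to-domain mapping; the real mapping lives in the
# SSO layer and is not reproduced here.
ROLE_DOMAINS = {
    "physician": {"clinical", "protocol"},
    "compliance_officer": {"compliance", "policy"},
    "administrator": {"administrative", "policy"},
}

def allowed_domains(role: str) -> list[str]:
    # Unknown roles get no document access by default (fail closed).
    return sorted(ROLE_DOMAINS.get(role, set()))

def visible(chunk_metadata: dict, role: str) -> bool:
    # Applied before retrieval: a chunk is only searchable if its
    # document_type is in the caller's allowed set.
    return chunk_metadata.get("document_type") in ROLE_DOMAINS.get(role, set())
```

Filtering at the retrieval layer (rather than post-hoc on the answer) is what makes the guarantee in Section 08 hold: a nurse cannot retrieve out-of-domain documents regardless of how the query is phrased.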

2

Query Decomposition

Complex queries are automatically decomposed into sub-questions. A question like "What are the steps for discharging a patient with congestive heart failure and what forms need to be completed?" is decomposed into two parallel retrieval paths: discharge procedure and documentation requirements.

LLM sub-query generation → 1–4 parallel retrieval paths → async retrieval execution

3

Semantic Retrieval

Each sub-query is converted to a vector embedding and used to search the Weaviate vector database. The system performs hybrid search — combining dense semantic retrieval with sparse keyword matching — to surface the top-20 most relevant document chunks.

BGE-M3 embedding → Weaviate hybrid query (alpha=0.7 semantic / 0.3 keyword) → top-20 chunks returned with metadata
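The alpha weighting can be sketched as a convex combination of the two scores, assuming both are pre-normalized to [0, 1] (raw BM25 scores would need normalization first):

```python
def hybrid_score(dense: float, sparse: float, alpha: float = 0.7) -> float:
    # alpha = 0.7 weights semantic similarity over exact keyword match,
    # matching the hybrid weighting described above.
    return alpha * dense + (1 - alpha) * sparse

# Illustrative candidates: (dense score, sparse score), pre-normalized.
candidates = {
    "chunk-a": (0.90, 0.20),   # semantically close, few exact keywords
    "chunk-b": (0.40, 0.95),   # exact keyword hit, weaker semantics
}
ranked = sorted(candidates,
                key=lambda c: hybrid_score(*candidates[c]),
                reverse=True)
```

At alpha = 0.7 the semantically close chunk outranks the pure keyword hit, but a lower alpha would reverse the order, which is why the weight was tuned against the test-query set rather than left at a default.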

4

Reranking

The top-20 chunks from retrieval are passed through a cross-encoder reranking model that evaluates each chunk's relevance to the original query more precisely than the retrieval model could. The top-5 chunks advance to the generation stage.

Cohere cross-encoder reranker → chunks scored 0–1 → top-5 selected → source metadata preserved

5

Response Generation

The top-5 chunks are injected into the LLM context window alongside a role-specific system prompt. The LLM generates a response grounded exclusively in the provided context — it is instructed never to use general knowledge when contradicted by PVHN's documents.

System prompt + retrieved context + user query → LLaMA 70B inference → streamed response generation → citation markers injected

6

Citation & Output

The final response is returned to the user with inline citations linking to the specific document sections used. Staff can click any citation to open the source document. Every response is logged for audit purposes.

Response + citations → UI rendering → audit log written (user_id, timestamp, query_hash, doc_sources) → no query content stored
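A sketch of the audit-record shape implied above, with illustrative field names: the query is stored only as a SHA-256 hash, so the trail proves an interaction occurred (and which documents were consulted) without retaining the query's content.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(user_id: str, query: str, doc_sources: list[str]) -> str:
    """Serialize one append-only audit-log line."""
    record = {
        "user_id": user_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "query_hash": hashlib.sha256(query.encode("utf-8")).hexdigest(),
        "doc_sources": doc_sources,
        # Deliberately absent: no `query` or `response` field is
        # ever written, per the privacy design described above.
    }
    return json.dumps(record)

line = audit_record("u-123", "metformin dosing protocol", ["policy-774"])
```

Hashing still allows an auditor to confirm whether a specific known query was run (by hashing it and comparing), without the log ever disclosing what staff asked.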

Clinical Documentation Workflow: End-to-End

The most impactful workflow deployed at PVHN was the Clinical Documentation Assistant. Here is a detailed trace of how a physician's documentation session flows through the system:

Clinical Documentation — Workflow Trace
1
Physician
Completes patient encounter and opens IntelligenceAmplifier.AI documentation assistant from within the Epic EHR sidebar. Patient context (age, gender, diagnosis codes, active medications) is pre-populated via FHIR API — physician does not need to re-enter.
0 seconds — context pre-populated at interface open
2
Physician
Types a brief dictation note: "62-year-old female, T2DM, presenting with HbA1c of 9.2, increasing metformin to 2000mg/day, ordering quarterly labs, counseled on diet. Generate discharge summary."
~30 seconds physician input time
3
AI System
Retrieves PVHN's T2DM management protocol, discharge summary template, and metformin dosing guidelines from the knowledge base. Generates a complete SOAP-format discharge summary using PVHN's standard template, incorporating the physician's dictated information.
2.1 seconds (P50 latency)
4
Physician
Reviews the generated discharge summary. Makes one edit — adjusts the follow-up interval from 4 weeks to 6 weeks based on patient preference. Signs and locks the note.
~45 seconds review and approval
5
AI System
Detects metformin dose increase and automatically surfaces PVHN's renal function monitoring protocol, noting that eGFR should be checked before dose increase. Physician acknowledges — lab order already placed.
0.8 seconds proactive safety flag
6
System
Completed note is written back to Epic via FHIR API. Interaction logged to audit trail. Total elapsed time from encounter completion to signed note: 3 minutes 48 seconds.
Previous baseline: 22 minutes average

Compliance Automation Workflow

The compliance use case operates on a slightly different workflow model. Rather than real-time conversational queries, compliance officers work through structured audit preparation sessions. The AI processes a regulatory checklist against PVHN's internal policy documents, identifies gaps, and generates a gap analysis report with specific policy citations and recommended remediation actions.

Prior to deployment, preparing for a Joint Commission survey required a compliance team of three officers working for 2.8 weeks. After deployment, the same preparation takes 3.5 hours for one officer, with the AI generating the initial gap analysis in 18 minutes across 847 compliance requirements.

Prior Authorization Workflow

Prior authorization letters require the physician to articulate clinical necessity using specific language that satisfies insurer criteria — criteria that change frequently across dozens of payers. The AI workflow for this use case:

  1. Physician selects procedure and target payer from a dropdown integrated with the AI interface
  2. System retrieves payer-specific criteria documents from the knowledge base
  3. Physician provides a brief clinical summary (2–3 sentences)
  4. AI drafts a complete prior authorization letter meeting payer language requirements
  5. Physician reviews, edits if needed, and submits directly from the interface

Mean time from clinical summary to completed draft: 43 seconds. Prior authorization approval rates increased 22% in the first quarter post-deployment, attributed to more consistent use of "medically necessary" language.


07
Implementation Timeline

14 Weeks from Kickoff to Production

Week 1
Project Kickoff & Stakeholder Alignment
Engaged clinical leads, compliance, IT, and department managers. Established use case priority, success metrics, and data access protocols.
Week 2
Document Audit Begins
arvintech team embedded with PVHN IT to inventory all document repositories. Source system access provisioned.
Week 3–4
Data Quality & HIPAA Review
PHI scrubbing, document quality remediation, and Privacy Officer review completed. 11,436 documents approved for ingestion.
Week 5
Infrastructure Deployment
GPU cluster provisioned in PVHN datacenter. Weaviate, vLLM, and supporting services deployed. Network security review completed.
Week 6
Document Ingestion & Embedding
187,430 embeddings generated. Vector database indexed. Nightly sync pipeline activated.
Week 7
Alpha Testing with Clinical Champions
12 clinical champions (3 per use case domain) ran structured testing. 240 test queries evaluated. Retrieval precision: 73%.
Week 8
Pipeline Tuning & Prompt Engineering
Chunking, metadata filtering, and system prompts refined. Retrieval precision improved to 93.4%. P95 latency reduced to 2.7s.
Week 9–10
Pilot Rollout — Two Departments
Full deployment to Internal Medicine and Compliance teams (87 users). Real-world feedback gathered. Three minor prompt refinements made.
Week 11–12
Expanded Rollout — All Clinical Staff
System opened to all 1,400 PVHN staff. Training sessions delivered. Help desk support provided by arvintech for 2 weeks post-expansion.
Week 13–14
Stabilization & Handover
System monitoring handed over to PVHN IT. arvintech ongoing support SLA activated. Baseline metrics collection completed. Project formally closed.

08
Security & Compliance

HIPAA Compliance by Architecture, Not Policy

Healthcare AI deployments must navigate a compliance landscape for which general-purpose AI tools are architecturally unfit. Commercial AI APIs — which send data to external servers for processing — create fundamental HIPAA violations when handling Protected Health Information (PHI). PVHN's deployment was designed from the ground up to eliminate this risk entirely.

Zero External Data Transmission
All LLM inference occurs on PVHN's on-premise GPU cluster. No query content, document text, or patient context is ever sent to an external API.
PHI-Free Training Corpus
All training documents were processed through automated PHI detection and manual Privacy Officer review. Zero patient-identifiable information exists in the vector database.
AES-256 Encryption at Rest
All vector embeddings, document metadata, and system logs are encrypted at rest using AES-256. Encryption keys managed by PVHN's existing key management infrastructure.
TLS 1.3 in Transit
All inter-service communication within the AI platform — between the UI, API gateway, retrieval pipeline, and LLM inference — is encrypted with TLS 1.3.
Role-Based Access Control
Document access is segmented by staff role at the retrieval layer. A nurse cannot retrieve documents outside their domain regardless of query content. Enforced via SAML attributes.
Immutable Audit Logging
Every AI interaction is logged to an append-only audit trail retained for 7 years. Logs contain user identity, timestamp, query hash, and document sources — never query content or AI responses.
Business Associate Agreement
arvintech operates as a HIPAA Business Associate under a signed BAA with PVHN, covering all deployment, maintenance, and support activities.
Penetration Testing
The deployed system underwent third-party penetration testing prior to production go-live. Zero critical or high-severity findings were identified.

A formal HIPAA Technical Safeguard review was conducted by PVHN's Privacy Officer in partnership with arvintech prior to go-live. The review covered access controls, audit controls, integrity, person or entity authentication, and transmission security — all five required technical safeguards under 45 CFR §164.312. The deployment passed without findings requiring remediation.

All AI interactions are logged with user identity, timestamp, query hash (not query content), and document sources retrieved. Logs are retained for seven years per HIPAA requirements and stored in an immutable, append-only audit trail.


09
Results & Outcomes

Measured Outcomes at 90 Days

PVHN established a measurement framework at project kickoff to capture baseline metrics across all four use cases. The following outcomes were measured at the 90-day post-deployment mark, using the same methodology as the baseline assessment.

68%
Reduction in Documentation Time
Average physician documentation time dropped from 3.2 hours to 1.02 hours per day. Measured across 47 physicians over 90 days post-deployment.
18 min
Compliance Audit Preparation
The AI processes 847 Joint Commission requirements against PVHN policies in 18 minutes. Previously took 2.8 weeks with a 3-person team.
22%
Prior Auth Approval Rate Increase
More consistent use of "medically necessary" language in AI-drafted letters improved first-submission approval rates from 71% to 87%.
43 sec
Prior Auth Draft Generation
From clinical summary to completed draft letter. Previously averaged 34 minutes of physician and administrative staff time.
94%
Staff Satisfaction Score
94% of active users rated IntelligenceAmplifier.AI as "very useful" or "indispensable" in the 90-day survey. Adoption rate reached 89% of eligible staff.
$1.8M
Projected Annual Savings
Blended calculation of physician time recaptured, reduced denial rework, faster compliance preparation, and accelerated onboarding across all 1,400 staff.

Qualitative Feedback

Beyond quantitative metrics, PVHN conducted structured interviews with 86 staff members across all user groups at the 60-day mark. Themes that emerged consistently:

  • "I used to dread the end of my shift because I had two hours of documentation waiting for me. Now I finish my notes before I leave the floor. The AI draft is accurate enough that I'm usually just reviewing, not rewriting."
    Internal Medicine Physician, PVHN Hospital 2
  • "The compliance team used to be overwhelmed before every audit. Now I actually feel prepared. I ran our last Joint Commission prep in half a day and found two policy gaps I would have missed manually."
    Director of Compliance, Pacific Valley Health Network
  • "New nurses used to come to me with basic policy questions every day for months. Now they ask the AI first. I can tell the difference — they're more confident faster, and the questions I get are the harder ones that actually need a senior nurse."
    Charge Nurse, ICU — PVHN Hospital 1

10
Key Learnings

What We Would Do Differently — and What We Would Do Again

Every deployment of IntelligenceAmplifier.AI generates insights that we carry into future engagements. The PVHN deployment was our most complex healthcare implementation to date, and it produced several lessons worth sharing.

✓ Do Again

Embed clinical champions early and deeply

The 12 clinical champions who tested the system in Week 7 became the most effective advocates for adoption. Their real-world feedback drove the most important prompt refinements. Future deployments should increase the champion cohort and extend their involvement through go-live.

✓ Do Again

Invest heavily in the document audit before touching AI

Eight weeks of preparation before a single embedding was generated felt conservative. In retrospect, it was the single most valuable phase of the project. The 23% document remediation rate validated the investment.

↺ Improve

Plan for EHR integration earlier

The Epic FHIR integration was scoped as a stretch goal and ultimately delivered in Week 12 — two weeks later than planned. EHR API access requires multi-stakeholder approvals that take time. Future healthcare deployments will initiate the integration approval process in Week 1.

↺ Improve

Create role-specific onboarding materials, not one universal guide

The initial staff training used a single onboarding guide for all roles. Physicians engaged well; administrative staff reported confusion. Subsequent rollout used role-specific 10-minute video guides and adoption improved measurably in the departments trained with role-specific materials.

✓ Do Again

On-premise architecture eliminated the largest barrier to adoption

PVHN's security team had previously blocked two AI pilot programs that relied on external APIs. The on-premise architecture bypassed every objection — the Privacy Officer and CISO both approved the deployment at the architecture review stage, before any testing began. Private deployment is not just a technical choice; it is a trust strategy.

The PVHN deployment validated a core principle of IntelligenceAmplifier.AI: the value of an AI system in healthcare is not determined by the sophistication of the model, but by the quality of the organizational knowledge it is trained on, and the discipline with which it is integrated into real clinical workflows. AI that exists in isolation from how people actually work does not get used. AI that meets people where they are becomes indispensable within weeks.


Deploy AI in Your Healthcare Organization

Every hospital, clinic, and health network has a unique knowledge base. We'll deploy AI trained on yours — securely, privately, and fully HIPAA-compliant.

Deployment and ongoing support by arvintech — Managed IT & AI Services Since 2000