Case Study: How One County Delivered 100% Accessible Meetings in 60 Days

A White Paper for City & County Clerks

Prepared by Convene Research and Development


Scope and Purpose — This case study documents how a mid‑sized county (≈ 350,000 residents; five‑member board) achieved fully accessible public meetings within 60 days—without adding staff. The program aligned ADA Title II effective‑communication requirements with language‑access expectations under Title VI and DOJ’s WCAG‑anchored web rule. It details baseline gaps, the acceleration plan, role assignments, procurement, KPIs, and the evidence kept to demonstrate compliance and public value.

1. Executive Summary

Key Results — Caption latency fell from ~3.4 s to ≤ 1.7 s; archive caption accuracy rose from ~88% to ≥ 96%; interpreter fill increased from 82% to ≥ 99%; PRA retrieval time dropped from ~2 hours to ≤ 25 minutes; complaints decreased by 61%; average public‑comment participation increased by 18%.

Table 1A. Before/After Metrics (Illustrative)

Metric | Baseline | Day 60 | Evidence
Caption latency (live) | 3.4 s (avg) | 1.7 s (avg) | Encoder logs
Archive accuracy | 88% | 96% | WER samples
Interpreter fill | 82% | 99% | Roster/confirmations
PRA retrieval | 2:00 hrs | 0:25 hrs | Bundle index
Complaints/mo | 23 | 9 | Clerk log

Within 60 days, the county moved from reactive fixes to measurable parity in participation for people with disabilities and Limited English Proficient (LEP) residents. The program focused on five high‑leverage controls: intelligible audio, real‑time captions with post‑edit, qualified interpreters (ASL and spoken), accessible players and documents, and PRA‑ready archives. Success hinged on a clerk‑led governance model, outcome‑based SLAs, and a minimal KPI set reviewed weekly.

2. County Context and Baseline (Day 0)

Demographics & Language Distribution — ACS and school language surveys identified two Tier‑1 languages (Spanish, Vietnamese) and three Tier‑2 (Tagalog, Mandarin, Arabic). High‑contact programs included permits, public health, and planning hearings.

Table 2A. LEP Distribution (Illustrative)

Language | Est. LEP Population | Tier | Primary Touchpoints
Spanish | 38,000 | Tier-1 | Board; Planning; Public Health
Vietnamese | 7,500 | Tier-1 | Public Health; Human Services
Tagalog | 3,200 | Tier-2 | Permits; Clerk counter
Mandarin | 2,600 | Tier-2 | Planning; Business
Arabic | 2,100 | Tier-2 | Health; Human Services

The county runs three standing weekly public meetings, 8–15 ad hoc hearings per month, and maintains a public‑records archive stretching back ten years. Baseline issues included off‑axis microphones, inconsistent captions, no translated instructions for remote participation, and untagged PDF agendas. Public comment queues during remote participation lacked accessible labels and clear language‑selection guidance.

Table 1. Baseline Gap Summary (Day 0)

Area | Observed Gap | Impact
Audio | Inconsistent mic technique; clipping; room noise | Caption errors; interpreter strain
Captions | Unreliable enablement; latency spikes | Comprehension loss; complaints
ASL/Spoken interpretation | Ad hoc booking; no backup | No-shows; recesses
Web/PDF | Scanned agendas; missing alt text | Screen reader failure
Language access | English-only notices | LEP barriers to participation
Archives | No corrected captions; poor naming | PRA delays; rework

3. Legal Drivers and Success Criteria (Plain‑Language)

ADA Title II requires communication with people with disabilities to be as effective as with others; in meetings this often means captions, assistive listening systems (ALS), and qualified interpreters. Title VI and Executive Order 13166 require meaningful access for LEP individuals; DOJ’s web rule anchors meeting hubs and players to WCAG 2.1 AA. Success criteria were defined as: (1) ability for all participants to perceive and understand proceedings in real time; (2) complete archives with corrected captions and interpreter tracks; (3) translated instructions and vital notices for Tier‑1 languages; and (4) PRA‑ready bundles delivered within 30 minutes of request.

4. Methodology: The 60‑Day Acceleration Framework

The county followed a compressed plan: Days 0–10 audit; Days 11–20 procurement adjustments and quick wins; Days 21–45 operationalization of live controls and SOPs; Days 46–60 stabilization, training, and KPI hardening. Each workstream produced evidence artifacts to prove parity (screenshots, short clips, logs, and confirmation emails).

5. Governance & Roles (RACI)

The clerk served as executive sponsor and product owner. AV managed signal flow, captions, and interpreter routing; IT managed identity, security, and platform integrations; counsel validated thresholds and risk; Communications/Web managed WCAG conformance and language‑aware content. A quarterly access review was scheduled.

Table 2. RACI — Core Responsibilities

Function | Responsible (R) | Accountable (A) | Consulted (C) | Informed (I)
Policy & SLAs | Clerk | Clerk | Counsel | Board
Audio/Video Ops | AV | AV | IT | Clerk
Captions/Subtitles | AV | AV | Vendor | Clerk
Interpreters | Clerk | Clerk | AV/Vendor | Counsel
Web/WCAG | Comms/Web | Comms/Web | IT | Clerk
Archives/PRA | Clerk | Clerk | AV/Records | Counsel

6. Pre‑Implementation Audit (Days 0–10)

The team ran a structured audit covering notices/web, live ops, and archives. Each finding included a risk rating, owner, and fix. The audit produced a prioritized backlog that favored ‘vital’ content and high‑traffic pages; it also established sampling points for KPI monitoring during and after meetings.
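Where the backlog lives in a tracker or spreadsheet, re-ranking after each fix can be scripted rather than redone by hand. The sketch below is illustrative only: the record fields mirror the audit columns in Table 3, but the risk weights and the sort rule are assumptions, not the county's actual tooling.

```python
from dataclasses import dataclass

# Illustrative numeric weights for the qualitative risk ratings used in the audit.
RISK_SCORE = {"high": 3, "medium": 2, "low": 1}

@dataclass
class Finding:
    area: str            # e.g., "Player keyboard nav"
    test: str            # what was checked
    result: str          # "pass" or "fail"
    risk: str            # "high" | "medium" | "low"
    owner: str           # accountable role (Clerk, AV, Comms, IT)
    fix: str = ""        # remediation if the test failed
    vital: bool = False  # 'vital' content gets priority
    traffic: int = 0     # rough monthly views or attendance

def backlog(findings: list[Finding]) -> list[Finding]:
    """Failed findings first, ranked by risk, then the 'vital' flag, then traffic."""
    failed = [f for f in findings if f.result == "fail"]
    return sorted(failed, key=lambda f: (RISK_SCORE[f.risk], f.vital, f.traffic), reverse=True)

if __name__ == "__main__":
    sample = [
        Finding("Player keyboard nav", "Keyboard-only traversal", "fail", "high", "Comms",
                "Replace player; label controls", vital=True, traffic=12000),
        Finding("Agenda PDF", "Tagged + language set", "fail", "medium", "Clerk",
                "Retag; publish HTML mirror", vital=True, traffic=8000),
        Finding("Interpreter confirmation", "24-hour window met", "pass", "low", "Clerk"),
    ]
    for item in backlog(sample):
        print(f"{item.area}: {item.fix} (owner: {item.owner})")
```

Run weekly, the same script can also report how many high-risk items remain open, which fed the KPI review.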

Table 3. Audit Instrument (Excerpt)

Area | Test | Pass/Fail | Owner | Fix (if fail)
Player keyboard nav | Keyboard-only traversal | Fail | Comms | Replace player; label controls
Caption latency | Spot measure in live session | Fail | AV | Re-route encoder; vendor alert
Agenda PDF | Tagged + language set | Fail | Clerk | Retag; publish HTML mirror
Interpreter confirmation | 24-hour window met | Pass | Clerk | —

7. Workstream A — Notices & Web (WCAG)

Implementation Details — Replaced image‑based agendas with tagged PDFs and HTML mirrors; standardized ‘lang’ attributes; added descriptive link text in target languages; and implemented nightly link checks to reduce broken links.
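The nightly link check can be a short scheduled script that requests each published URL and reports failures. A minimal sketch using only the standard library; the input file name and timeout are illustrative assumptions:

```python
import sys
import urllib.request
import urllib.error

def check_links(urls: list[str], timeout: float = 10.0) -> list[tuple[str, str]]:
    """Return (url, reason) for every link that does not respond with HTTP 200."""
    broken = []
    for url in urls:
        try:
            req = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                if resp.status != 200:
                    broken.append((url, f"HTTP {resp.status}"))
        except (urllib.error.URLError, TimeoutError) as exc:
            broken.append((url, str(exc)))
    return broken

if __name__ == "__main__":
    # hub_links.txt: one meeting-hub URL per line (agendas, players, translated instructions).
    with open("hub_links.txt", encoding="utf-8") as fh:
        urls = [line.strip() for line in fh if line.strip()]
    for url, reason in check_links(urls):
        print(f"BROKEN  {url}  ({reason})", file=sys.stderr)
```

Scheduled nightly (for example via cron or a task scheduler), the report feeds the broken-link KPI and the T-24h pre-flight check.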

Table 4A. WCAG 2.1 AA — Priority Success Criteria

SC | Title | Why It Mattered | Verification
1.4.3 | Contrast (Minimum) | Legibility for captions/UI | Contrast checker pass
2.1.1 | Keyboard | Assistive tech compatibility | Keyboard-only traversal
2.4.7 | Focus Visible | Orientation and control | Visible focus outline
3.3.1 | Error Identification | Forms for public comment | Announce errors

The county re‑platformed the meeting hub to an accessible player and rebuilt top pages with WCAG 2.1 AA guardrails. All agenda PDFs from the last six months were tagged, and HTML mirrors were posted for time‑sensitive items. Language assistance taglines and instructions were translated for Tier‑1 languages and linked from the hub.
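Spot checks that each hub page declares the correct language (so screen readers select the right voice) can also be scripted. A minimal sketch using only the standard library; the page list and expected language codes are illustrative assumptions, not the county's actual URLs:

```python
from html.parser import HTMLParser
import urllib.request

class LangSniffer(HTMLParser):
    """Capture the lang attribute from the document's <html> tag."""
    def __init__(self):
        super().__init__()
        self.lang = None

    def handle_starttag(self, tag, attrs):
        if tag == "html" and self.lang is None:
            self.lang = dict(attrs).get("lang")

def page_lang(url: str) -> str | None:
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    sniffer = LangSniffer()
    sniffer.feed(html)
    return sniffer.lang

if __name__ == "__main__":
    # Illustrative pairs: hub page and the language it should declare.
    expected = {
        "https://example.gov/meetings": "en",
        "https://example.gov/meetings/es/instrucciones": "es",
        "https://example.gov/meetings/vi/huong-dan": "vi",
    }
    for url, lang in expected.items():
        found = page_lang(url)
        status = "OK" if found == lang else f"MISMATCH (found {found!r})"
        print(f"{url}: expected {lang} -> {status}")
```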

Table 4. WCAG‑Mapped Checklist (Meeting Hub)

Area | Requirement | Test
Keyboard navigation | All controls operable | Keyboard-only pass
Contrast | Meets AA | Contrast checker
Labels | Meaningful & local-language | Screen reader readout
PDFs | Tagged; lang set; reading order | Tag tree inspection

8. Workstream B — Live Meeting Operations

Audio Chain — Cardioid goosenecks for dais, bodypacks for presenters, and boundary mics for lecterns. DSP presets controlled gain staging and light compression; operators monitored return feeds and echo cancellation in the conferencing layer.

Table 5A. Audio/DSP Settings (Guidance)

Stage | Setting | Notes
Mic input | -18 dBFS peak | Protect headroom
EQ | High-pass 80–100 Hz | Cut rumble
Dynamics | 2:1 compression, soft knee | Level smoothing
AEC | Enabled; slow adaptation | Avoid artifacts

AV standardized microphone types and placement, implemented conservative DSP profiles, and set caption latency targets (≤ 2.0 s). ASL picture‑in‑picture (PIP) was made persistent at ≥ 1/8 video height; spoken interpreter channels were labeled in the target language. A recess/resume SOP was adopted to restore parity within minutes during outages.
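Latency spot checks can be scripted from paired timestamps rather than done purely by stopwatch. A minimal sketch, assuming the operator logs when a phrase was spoken and when its caption appeared in a simple CSV; the column names are an assumption, not a vendor export format:

```python
import csv
from statistics import mean

TARGET_SECONDS = 2.0  # live caption latency target

def latency_report(path: str) -> None:
    """Read paired timestamps (seconds into the meeting) and compare to the 2.0 s target."""
    delays = []
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):  # columns: spoken_at, captioned_at
            delays.append(float(row["captioned_at"]) - float(row["spoken_at"]))
    if not delays:
        print("No samples recorded.")
        return
    avg, worst = mean(delays), max(delays)
    status = "OK" if avg <= TARGET_SECONDS else "OFF-NOMINAL: reboot path / switch provider / recess"
    print(f"samples={len(delays)}  avg={avg:.1f}s  worst={worst:.1f}s  -> {status}")

if __name__ == "__main__":
    latency_report("caption_latency_samples.csv")
```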

Table 5. Live Operations Controls

Control | Target | Action When Off-Nominal
Caption latency | ≤ 2.0 s | Reboot path; switch provider; recess
ASL visibility | ≥ 1/8 video height, always-on | Reframe; lock layout
Interpreter audio | Clean, labeled channels | Bridge backup; verify return
Remote comment | Accessible queue; equal time | Announce path; extend time

9. Workstream C — Language Access Program

Four‑Factor Application — Population size, frequency of contact, service importance, and available resources were scored 1–5 to rank languages. Promotion/demotion rules were documented with annual review and community‑based organization (CBO) input.

Table 6A. Four‑Factor Scoring (Sample)

Language | Size | Frequency | Importance | Composite | Tier
Spanish | 5 | 5 | 5 | 5.0 | Tier-1
Vietnamese | 4 | 3 | 4 | 3.7 | Tier-1
Tagalog | 3 | 2 | 3 | 2.7 | Tier-2
Mandarin | 3 | 2 | 3 | 2.7 | Tier-2

Using a four‑factor analysis, the county set Tier‑1 languages (Spanish and Vietnamese) for full coverage across notices, interpreters at hearings, and translated post‑meeting summaries. Tier‑2 languages received translated vital documents and on‑demand phone interpretation; Tier‑3 relied on phone interpretation plus plain‑language English with pictograms until promotion was warranted.
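The composite and tier assignment can be kept repeatable for annual review with a short script. A minimal sketch: the composite is taken as the mean of the factor scores shown in Table 6A, and the tier cutoffs below are assumptions chosen only to reproduce that sample; the county's actual promotion/demotion rules also apply the population thresholds in Table 6.

```python
from statistics import mean

def composite(scores: dict[str, int]) -> float:
    """Composite = mean of the 1-5 factor scores, rounded to one decimal."""
    return round(mean(scores.values()), 1)

def tier(score: float) -> str:
    # Illustrative cutoffs; actual rules also weigh Table 6 population thresholds.
    if score >= 3.5:
        return "Tier-1"
    if score >= 2.0:
        return "Tier-2"
    return "Tier-3"

if __name__ == "__main__":
    sample = {
        "Spanish":    {"size": 5, "frequency": 5, "importance": 5},
        "Vietnamese": {"size": 4, "frequency": 3, "importance": 4},
        "Tagalog":    {"size": 3, "frequency": 2, "importance": 3},
    }
    for language, factors in sample.items():
        score = composite(factors)
        print(f"{language}: composite={score} -> {tier(score)}")
```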

Table 6. Language Tiering (Illustrative)

Tier | Eligibility (either/or) | Services Guaranteed
Tier-1 | ≥5% of population or ≥10k residents | Translated notices; interpreters at meetings; translated summaries; hotline
Tier-2 | ≥1% or ≥2k residents | Translated vital docs; on-request interpreters; web instructions
Tier-3 | <1% and <2k | On-demand phone interpretation; pictograms + plain English

10. Workstream D — Archives & PRA Readiness

The county established a Meeting ID schema and automated bundling of the authoritative video, corrected captions (VTT/SRT), interpreter audio tracks, minutes, and exhibits. A simple index file enabled retrieval within 30 minutes, reducing PRA effort and improving transparency.
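A minimal sketch of the bundling step, assuming a Meeting ID folder convention (for example a date plus body name) and the file suffixes listed in Table 7; the JSON index layout is illustrative, not a records-system export:

```python
import json
from pathlib import Path

# Asset classes that make up a complete PRA-ready bundle (see Table 7).
REQUIRED = {
    "video":       [".mp4"],
    "captions":    [".vtt", ".srt"],
    "interpreter": [".mp3"],
    "documents":   [".pdf"],
}

def build_index(meeting_dir: str) -> dict:
    """Inventory a meeting folder and flag any missing asset class before publishing."""
    folder = Path(meeting_dir)
    assets = {name: [p.name for p in folder.iterdir() if p.suffix.lower() in exts]
              for name, exts in REQUIRED.items()}
    index = {
        "meeting_id": folder.name,  # e.g., a folder named "2024-03-12_Board"
        "assets": assets,
        "missing": [name for name, files in assets.items() if not files],
    }
    (folder / "index.json").write_text(json.dumps(index, indent=2), encoding="utf-8")
    return index

if __name__ == "__main__":
    result = build_index("2024-03-12_Board")
    if result["missing"]:
        print("Bundle incomplete:", ", ".join(result["missing"]))
```

The generated index.json doubles as the retrieval aid for the 30-minute PRA target.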

Table 7. PRA‑Ready Bundle (Minimum)

Asset | Example | Purpose
Video master | YYYY-MM-DD_BoardMeeting.mp4 | Authoritative record
Captions/Subtitles | …_Captions.vtt / …_ES.srt | Search + access
Interpreter audio | …_Spanish.mp3 | Participation parity
Minutes/Exhibits | …_Minutes.pdf; …_ExhibitA.pdf | Context

11. Technology Stack & Reference Architecture

Security & Privacy — Staff access uses SSO; archives are role‑based; vendor contracts prohibit model‑training on county content; and export rights for VTT/SRT and TMX/TBX are mandatory.

The stack covered capture (mics/DSP/cameras), encode/mix, captions and interpretation, distribution (WCAG‑compliant player), and archive/PRA. Interfaces were specified to be low‑latency, with open export formats for captions (VTT/SRT) and translation assets (TMX/TBX).

Table 8. Reference Architecture — Interfaces

Stage | Key Interface | Requirement
Capture | DSP to encoder | Low noise; headroom
Interpretation | Clean feed + talk-back | Low latency; labeled channels
Captioning | Audio to CART/ASR | Latency & accuracy targets
Distribution | Player UI | Keyboard operable; labeled toggles
Archive | Asset bundle | Meeting ID; VTT/SRT; interpreter tracks

12. Procurement & SLAs (Outcomes over Features)

Contracts were revised to prioritize measurable outcomes: interpreter fill ≥98% with ≤24‑hour response, caption latency ≤2.0 s live and archive accuracy ≥95% within 72 hours, player WCAG 2.1 AA conformance, and export rights for artifacts. Credits and corrective action plans were enforced for misses.
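Outcome clauses are easier to enforce when the monthly vendor review is scripted against the same numbers. A minimal sketch, assuming reported metrics arrive as a simple mapping; the field names are illustrative, and the targets mirror Table 9:

```python
# SLA targets from the revised contracts (see Table 9).
SLA = {
    "interpreter_fill_rate":    ("min", 0.98),
    "caption_accuracy_archive": ("min", 0.95),
    "caption_latency_live_s":   ("max", 2.0),
}

def review(month: str, metrics: dict[str, float]) -> list[str]:
    """Compare reported metrics to SLA targets; return the misses needing a credit or CAP."""
    misses = []
    for name, (direction, target) in SLA.items():
        value = metrics[name]
        ok = value >= target if direction == "min" else value <= target
        if not ok:
            misses.append(f"{month}: {name} = {value} (target {direction} {target}) -> credit + corrective action")
    return misses

if __name__ == "__main__":
    for line in review("2024-05", {"interpreter_fill_rate": 0.96,
                                   "caption_accuracy_archive": 0.97,
                                   "caption_latency_live_s": 2.6}):
        print(line)
```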

Table 9. Outcome‑Based SLA Clauses (Excerpt)

Outcome | Target | Remedy/Credit
Interpreter fill rate | ≥ 98% | Backup at vendor cost
Caption accuracy (archive) | ≥ 95% / 72 h | 5–10% credit
Caption latency (live) | ≤ 2.0 s | Credit + RCA
Player accessibility | WCAG 2.1 AA | Quarterly report

13. Training & Change Management

Quarterly micro‑trainings were scheduled for clerks, AV, and moderators. Laminated scripts were issued for opening, recessing, and resuming meetings when access degraded. A change log tracked platform updates, glossary additions, and SOP revisions, with ‘what’s new’ notes posted on the meeting hub.

14. Budget & ROI (Including Avoided Costs)

The project emphasized avoided costs—complaints, continuances, PRA labor, and vendor churn—over hardware purchases. Small jurisdictions can achieve parity with templates, on‑request interpreters, and post‑edit captions; mid‑size jurisdictions add remote simultaneous interpretation (RSI) and quarterly QA; large jurisdictions implement redundancy and regional pools.

Table 10. Planning Ranges (Illustrative)

Line Item | Small (≤25k residents) | Mid (25k–250k) | Large (≥250k)
Interpretation (ASL/spoken) | $10k–$25k | $25k–$70k | $70k–$160k
Captions (live+post) | $8k–$18k | $18k–$40k | $45k–$95k
Accessible player & web | $5k–$12k | $12k–$30k | $30k–$70k
QA & TMS | $3k–$10k | $10k–$25k | $25k–$55k

15. KPIs, Measurement, and Dashboard

The county tracked a compact KPI set: live caption latency, archive accuracy, interpreter fill rate, ALS checkout rate, broken‑link rate, and PRA retrieval time. Results were reviewed weekly during the 60‑day push and quarterly thereafter.

Table 11. KPI Dashboard (Core Set)

KPI | Definition | Target
Caption latency | Live delay | ≤ 2.0 s
Archive accuracy | Post-edit % | ≥ 95%
Interpreter fill | Confirmed/requested | ≥ 98%
ALS coverage | Receivers in service | 100% coverage
PRA retrieval | Time to bundle | ≤ 30 min

16. Risk, Issues, and Incidents

A risk register identified interpreter no‑shows, caption outages, mislabeled channels, untagged PDFs, broken links, and platform downtime. Incident response used the recess/resume SOP and post‑incident root‑cause analysis with corrective actions.

Table 12. Risk Register (Excerpt)

Risk | Likelihood | Impact | Mitigation
Interpreter no-show | Low | Medium | Backup roster; response windows
Caption outage | Med | High | Failover encoder; recess script
Mislabeled channels | Low | High | Pre-flight; on-screen labels
Untagged PDFs | Med | Med | Tagging; HTML mirrors
Broken links | Low | High | T-24h/T-1h checks

17. Outcomes: What “100% Accessible” Means

‘100% accessible’ was defined as: (a) captions enabled at gavel with ≤ 2.0 s live latency; (b) ASL PIP persistent and visible; (c) spoken interpretation available in Tier‑1 languages; (d) accessible player (keyboard, labels, contrast); (e) translated instructions posted pre‑meeting; and (f) archives published with corrected captions and interpreter tracks within 72 hours. Compliance was evidenced with screenshots, short clips, logs, and roster confirmations.

18. Lessons Learned & Recommendations

19. Limitations and Next Steps

This was an acceleration program; some deeper remediations (legacy archives, complex forms) require ongoing effort and regional partnerships. Next steps include regional interpreter pools, shared translation memory, and expanded reader/usability testing.

20. Equity & Community Engagement Outcomes

The county partnered with CBOs to run reader tests (N=5 per language) and usability walk‑throughs of the meeting hub. Changes included simpler instructions, larger caption fonts, and clearer ASL PIP guidance. LEP participation in public comment increased by 23%.

21. Comparative Benchmarking vs Peer Counties

The team compared caption visibility, interpreter availability, and archive completeness across five neighboring counties. The subject county moved from the bottom third to the top quartile on all measures by Day 60.

22. Governance Artifacts (Excerpts)

Table 13A. Governance Artifacts — Where to Find Them

Artifact | Location | Owner | Review Cycle
LAP Charter | Meeting hub / Governance | Clerk | Annual
Pre-Flight SOP | Meeting hub / SOPs | AV | Quarterly
KPI Dictionary | Meeting hub / Metrics | Clerk | Quarterly
Vendor CAP Template | Procurement / Exhibits | Clerk | As needed

Artifacts included a Language Access Program charter; SOPs for pre‑flight, live operations, and archive bundling; a KPI data dictionary; and a vendor corrective‑action template.

23. Sustainability Roadmap (12–24 Months)

Extend accessible practices to advisory bodies, pilot regional interpreter pools, and migrate legacy archives to corrected captions. Publish annual community scorecards and expand glossary/translation memory governance.

24. Frequently Encountered Pitfalls & Fix Patterns

Table 14A. Pitfalls → Fix Patterns

Pitfall | Symptom | Fix Pattern
Mislabeled channels | Wrong language heard | Label in target language; test return
Hidden captions | Subtitles obscured | Safe-area policy; UI lock
Scanned PDFs | Screen reader failure | Tagging SOP; HTML mirror
No interpreter backup | Recess/no-show | Backup roster; response windows

Mislabeled language channels, overlays hiding captions, and untagged last‑minute PDFs were the top issues. Fix patterns: enforce pre‑flight checklists, lock player layouts, and maintain an emergency HTML posting path for agendas.

25. Replication Playbook for Small Jurisdictions

Focus on Tier‑1 language notices; captions at gavel; on‑demand phone interpretation; corrected captions within 72 hours; and shared services for cost control.

26. Footnotes

[1] ADA Title II; 28 C.F.R. part 35 (Effective Communication).
[2] Title VI of the Civil Rights Act; Executive Order 13166 (LEP Access).
[3] DOJ Final Rule on Web Accessibility for State and Local Governments (WCAG 2.1 AA).

27. Bibliography

U.S. Department of Justice — ADA and LEP Guidance; Executive Order 13166 resources; W3C WCAG 2.1; National League of Cities/state municipal leagues; professional associations for interpreters and captioning (RID/NBCMI/ATA; NCRA/CART).

Appendix A. 60‑Day Timeline (Milestones)

The schedule emphasized parallel workstreams and weekly checkpoints with the Clerk as product owner.

Table A1. 60‑Day Plan (Illustrative)

Phase | Days | Milestones | Evidence
Audit | 0–10 | Web/WCAG, live ops, archives | Findings log; backlog
Quick Wins | 11–20 | Enable captions; tag top PDFs; interpreters onboard | Screenshots; confirmations
Operationalize | 21–45 | ASL PIP, RSI routing, SOPs live | Clips; SOP sign-off
Stabilize | 46–60 | KPI checks; post-edit within 72 h; PRA bundles | Dashboards; bundle index

Appendix B. Pre‑Flight Checklists

Checklists covered T‑24h and T‑1h tasks for agendas/notices, interpreter logistics, caption routing, player UI review, ALS devices, and evidence capture.

Table B1. Pre‑Flight (Excerpt)

Area | Task | Owner | Proof
Agendas/Notices | Translated & posted; instructions localized | Clerk/Comms | Links; screenshots
Interpreters | Confirm; test channel map | Clerk/AV | Roster; capture
Captions | Enable at gavel; verify delay | AV | Encoder stats
Player | Keyboard nav; labels; contrast | Comms | Screen reader test
ALS | Receivers charged; signage posted | AV | Inventory log

Appendix C. Moderator Scripts (Open/Recess/Resume)

Open: “Captions are enabled. Interpretation is available on the language menu. If you need assistance, please notify the clerk.”
Recess: “We are recessing to restore captions/interpretation so all may follow.”
Resume: “Access has been restored; we are resuming the meeting.”

Appendix D. KPI Data Dictionary

The dictionary defines metric names, collection methods, sampling cadence, and pass thresholds to support audits and vendor reviews.
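Word error rate (WER) itself can be computed with a short script over the 3×2-minute samples. A minimal sketch, assuming hand-prepared reference transcripts are compared against the published captions using simple word-level edit distance (not a vendor tool):

```python
def word_errors(reference: list[str], hypothesis: list[str]) -> int:
    """Levenshtein distance over words: substitutions + insertions + deletions."""
    rows, cols = len(reference) + 1, len(hypothesis) + 1
    d = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        d[i][0] = i
    for j in range(cols):
        d[0][j] = j
    for i in range(1, rows):
        for j in range(1, cols):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[-1][-1]

def accuracy(reference_text: str, caption_text: str) -> float:
    """Caption accuracy = 1 - WER, expressed as a percentage."""
    ref, hyp = reference_text.lower().split(), caption_text.lower().split()
    return 100.0 * (1 - word_errors(ref, hyp) / max(len(ref), 1))

if __name__ == "__main__":
    # Pool the sampled segments and compare against the post-edit target (>= 95%).
    samples = [("motion to approve the consent calendar", "motion to approve the consent calender"),
               ("public comment is limited to three minutes", "public comment is limited to 3 minutes")]
    scores = [accuracy(ref, hyp) for ref, hyp in samples]
    print(f"mean accuracy = {sum(scores) / len(scores):.1f}%")
```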

Table D1. KPI Definitions (Core Set)

Metric | Definition | Method | Target
Caption latency | Delay between speech and subtitle | Stopwatch or encoder stat | ≤ 2.0 s
Archive accuracy | Post-edit caption % | WER sample (3×2-min) | ≥ 95%
Interpreter fill | Confirmed / requested | Roster logs | ≥ 98%
PRA retrieval | Time to assemble bundle | Timed drill | ≤ 30 min


Convene helps Government have one conversation in all languages.

Engage every resident with Convene Video Language Translation so everyone can understand, participate, and be heard.

Schedule your free demo today: