
Our Goal
In the fast-evolving landscape of AI, we saw an opportunity to revolutionize local election coverage in our newsroom by reducing manual, repetitive tasks so our journalists could focus on in-depth reporting and storytelling. Election season is our busiest time of year, and methodically copying and pasting candidate names and results from dozens of county, city and school board races was not an efficient use of our team’s time. We sought to use AI-powered data scraping and automation to streamline our workflow and provide real-time local election results to our readers — something not provided by AP, Ballotpedia or other large services focused only on top-of-ticket races and legislative contests. Our Election Hub was the result.
Table of Contents
- An Introduction to the Problem: Here’s how we identified the need for a smarter, AI-assisted workflow and the challenges that inspired this playbook.
- Challenges our newsroom faced covering dozens of local races.
- Our desire to prioritize storytelling over data entry.
- Scraping the Preview: Weeks before election night, the data scraper will pull all county ballot information into a spreadsheet.
- This information will be compiled and pulled onto a dashboard to preview all the local races and measures in 13 counties. This is not the real-time automation that we will use on election night.
- Election Command Center: Create a comprehensive, all-in-one dashboard that equips our readers with everything they need to navigate the election process across all 13 counties we cover in the greater SF Bay Area.
- This central hub will answer voter questions, provide key deadlines, polling locations, ballot details — ensuring every voter has the information they need at their fingertips.
- It will also include drop box locations and voting centers for each county, making it easier for voters to find where and how to cast their ballots.
- Then on Nov. 5, it will provide real-time election updates.
- Bringing the Ballot to Life: Generate a custom HTML layout to display ballot information in a user-friendly, searchable election interface.
- Each county will have a dedicated page for measures and candidates, clearly labeling each contestant by county or city race and providing detailed descriptions of all ballot measures.
- A data scraper will pull this information into a spreadsheet to be generated in the HTML layout. This will be the placeholder for election results and is what will be updated to reflect real-time results.
- Powering Real-Time Election Coverage: Scrape election data from all 13 counties’ registrar sites in real-time and pull the results to our dashboard.
- With each result update, the data scraper must generate custom code to extract election data from every county.
- The process involves seamlessly integrating the extracted data into the HTML layout displaying our ballot information on Election night.
- Reflection & Recommendations for Improvement: We interviewed election officials on the county and state level, small and big newsrooms, election software vendors and tech professionals to craft recommendations to improve this workflow and project for others looking to replicate it.
- Newsrooms: Advice on what to plan for during Election season and a guide on preparing staffers for Election night.
- We provided a few different options to explore and expand on this project.
- Ciara’s Post Script: This is a first-hand account from the intern, now a part of our staff, who played a crucial role in developing the AI-generated dashboard.
- The behind-the-scenes of working with AI bots and the patience it took to troubleshoot every glitch.
- Her key takeaways from working at the intersection of AI and journalism for the first time.
- Resources & Leads: Here’s what we used to put this playbook together and other information to give context to the election process.
- A comprehensive collection of resources, reports, regulations, and research related to California’s voting systems and election technology.
- A compiled list of everyone we interviewed, including notes and related links.
1. An Introduction to the Problem
Here’s how we identified the need for a smarter, AI-assisted workflow and the challenges that inspired this playbook.
For journalists, election day is the ultimate event, our Super Bowl. In the weeks leading up to it, the news cycle runs nonstop, and voters eagerly refresh our site, searching for the latest insights to guide their decisions. It’s the biggest web and social media traffic surge of the year. During the critical three months before election day, we meet multiple times a week to strategize social content, refine ballot preview coverage, and most importantly, build a workflow to deliver real-time election results to our readers.
One question kept coming up: How many staff members would have to abandon their posts to manually copy and paste election results? It was a dilemma because our time and energy were better spent writing race result summaries and profiling leading candidates to provide meaning to go with the numbers. It’s true that journalists thrive under pressure, but not by choice and certainly not without a few headaches and premature gray hairs. That’s why we saw AI as a potential solution, one that could lighten the load and let us focus on the reporting that truly mattered.

This playbook is designed to help small newsrooms build a workflow that leverages AI to deliver real-time election results, without the need for a dedicated data or coding team. Our project wasn’t without its flaws or roadblocks, and we didn’t achieve every goal we set. However, it was an invaluable learning experience that revealed both the capabilities and limitations of AI, with takeaways that can be used by other small newsrooms. What we learned has already shaped and improved our approach to using AI in future projects. And if your newsroom isn’t using AI yet, it’s time to catch up because the industry is already moving forward — with or without you.
2. Scraping the Preview
Weeks before election night, the data scraper will pull all county ballot information into a spreadsheet.
Our Strategy in the Past
In the past, our process for gathering ballot information involved manually scouring county election websites and parsing through PDFs that listed all races and measures. We painstakingly transferred this information, bit by bit, into a simple, structured list for our Bay City News wire service media partners across TV, radio, print, and digital platforms, all of whom conduct their own election coverage.
Most of this work was done manually, with editors and reporters either typing in data or copying and pasting from various sources to create a more user-friendly format. We also compiled local races and measures into a public-facing dashboard on our nonprofit site as a public service.
On election night, we followed a similar workflow, updating results as they came in sporadically, verifying final tallies and updating certified results after polling closed. We typically assigned a journalist to each county to track results and write stories, but the sheer volume of content often overwhelmed editors, who had to fact-check, copy-edit, and ensure quality control before publication.
Identifying the Gaps
Before revamping our workflow, we conducted extensive research. Top-level races were well-covered by services like AP, Ballotpedia, the Knight Election Hub and Perplexity, but local news outlets were largely left to fend for themselves when it came to down-ballot races and measures at the county and city levels.
As a Bay Area-based newsroom, we also examined how regional media organizations, such as KQED, the San Francisco Chronicle, Bay Area News Group, and CalMatters, handled their ballot previews. We found that they faced similar challenges:
- Limited staffing to cover full ballots comprehensively.
- Lack of communication from county election offices about ballot details.
- Different systems: each county used a different platform and API, making it nearly impossible to apply a standardized tool or code to pull results uniformly.
Implementing AI
To streamline the ballot collection process, we leveraged data scraping and AI-enhanced automation to extract information directly from county ballot PDFs and websites, moving the data into a structured spreadsheet.
Our goal was to extract only the key fields:
- For candidate races: Name, occupation, and office.
- For ballot measures: Title and description.
However, achieving clean and accurate results required significant trial and error. We had to fine-tune our approach to ensure that essential information wasn’t lost, names with accents weren’t mangled and unnecessary data wasn’t included.
Each of the 13 counties we covered required a unique handling method:
- For counties with ballot PDFs, we downloaded the files and used Claude.ai to extract structured data.
- For counties with HTML-based ballots, we downloaded the HTML and ran it through Claude.ai to grab the information.
- For simple webpages, copying and pasting the entire ballot text into Claude.ai was often the fastest and most effective method.
- For counties offering CSV files, we assessed their usability; some contained too much extraneous data, making them less practical.
Since each county had a different ballot format, we had to tailor our AI prompts for each one, refining them until they consistently produced accurate, structured data.
This AI-assisted approach significantly reduced the manual workload and allowed us to standardize our ballot data across multiple counties despite their differing formats.
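Whichever extraction route a county required, the output had to land in the same spreadsheet schema before it was usable. A minimal sketch of that normalization step, using hypothetical row data and the key fields named above (accented names pass through untouched because the strings are never re-encoded):

```python
import csv
import io

# Hypothetical schema matching the key candidate fields described above.
CANDIDATE_FIELDS = ["name", "occupation", "office"]

def normalize_candidate_rows(rows):
    """Keep only the key fields, strip stray whitespace, and drop
    blank or duplicate entries produced by a messy AI extraction."""
    seen = set()
    cleaned = []
    for row in rows:
        record = tuple((row.get(f) or "").strip() for f in CANDIDATE_FIELDS)
        if not record[0] or record in seen:  # skip blanks and duplicates
            continue
        seen.add(record)
        cleaned.append(dict(zip(CANDIDATE_FIELDS, record)))
    return cleaned

def to_csv(rows):
    """Serialize normalized rows to the spreadsheet format."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=CANDIDATE_FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Illustrative raw output from an extraction pass.
raw = [
    {"name": "María López ", "occupation": "Teacher", "office": "School Board", "extra": "ignore"},
    {"name": "María López", "occupation": "Teacher", "office": "School Board"},  # duplicate
    {"name": "", "occupation": "", "office": ""},  # blank row from a bad parse
]
cleaned = normalize_candidate_rows(raw)
```

A check like this catches the mangled names and stray columns before they reach the public-facing spreadsheet.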
3. Election Command Center
Create a comprehensive, all-in-one dashboard that equips our readers with everything they need to navigate the election process across all 13 counties we cover.
Creation
If your newsroom publishes on WordPress, you know the platform has limited built-in customization for creating landing pages, especially without relying on fancy plug-ins.
For our election landing page, we wanted each county’s information to be easily accessible through buttons housed under its respective section. To enhance recognizability and give each county autonomy, we decided to use county emblems as visual markers.
The challenge was structuring the layout for 13 counties in a way that felt balanced and user-friendly. We opted for a four-column grid with five rows. Since 13 isn’t an even fit for this structure, the final row only had one county. To fill the remaining space, we added a graphic to maintain a clean and visually appealing design.
To streamline the process, we first designed a single county block, fine-tuning its size, padding and button placement. Once it was set, we duplicated it across the four columns in the first row. Then, we copied the entire row four more times, ensuring a consistent layout across all counties.
This approach eliminated the need to manually create and adjust each county section. Instead, we simply updated the county name and emblem in each block, drastically cutting down the time required for setup.
Each county block included:
- A county emblem (image)
- Two buttons (split into columns)
- Two additional buttons below
We strongly recommend this approach of fine-tuning a template block and duplicating it because it significantly saves time and is a more effective way of building a WordPress landing page.


Content
When crafting this landing page, we focused on what voters need to make informed decisions. Our goal was to centralize essential election information in one place, ensuring easy access to news, results and resources.
We knew our election stories had to be a key feature, providing voters with relevant news about local races. In addition to our reporting, we embedded live state and national election results using data visualizations from the Associated Press (AP). Through our AP partnership, we had the flexibility to select and display priority races, including state senate contests and other key statewide elections that would be of high interest to our audience.

For national coverage, we integrated AP’s interactive House, Senate, and Presidential race maps, along with ballot measure descriptions for every state. We also linked to national election stories to give readers a broader context beyond local elections. While our primary focus was on local races, having AP’s data relieved us of the burden of tracking and updating high-profile contests, a first for our newsroom.
Beyond election results, we incorporated additional tools to keep voters informed and engaged:
- Essential Voting Resources: Easy-to-find links for voter registration, ballot completion guides, FAQs, and CalMatters’ in-depth voter guide.
- Voting Centers & Dropbox Locations: Our team designed custom maps embedded via Google iframes, providing clear, county-specific locations for voters to easily find where to cast their ballots.
- Convention Coverage & Past Elections: Comprehensive coverage of both the RNC and DNC conventions, along with quick-access buttons to past election results.
- Activating Democracy Section: A curated list of organizations dedicated to voter registration, political activism and civic education.
- Agenda Watch: A tool that helps readers track local government actions, follow public meetings and receive notifications on issues they care about.
To ensure transparency, we included a bold red disclaimer, informing readers that some of our election data is processed using AI, which may result in slight discrepancies or delays as our editors verify information.
This landing page was designed to be an all-in-one resource, empowering voters with the knowledge they need to navigate the election process confidently.
4. Bringing the Ballot to Life
Generate a custom HTML layout to display ballot information in a user-friendly, searchable election interface.
Tackling the Spreadsheet
Our next challenge was figuring out how to present our ballot information in a way that mirrored the election format, boxing races correctly while also ensuring a seamless and time-efficient way to pull in live results. We had no idea how to accomplish this, let alone how to leverage AI for it. So, we handed this task to our then-intern (now staff member), Ciara Zavala, to experiment and see what she could come up with.
With no clear direction or model, she took a straightforward approach: she asked ChatGPT how to do it.

Her first step was to generate a template in HTML. She provided the bot with five sample races, including fake vote counts, and asked it to format the results so that each race box displayed all candidates alongside their vote totals. The bot delivered exactly that. Encouraged by this, she scaled up, feeding it data for 200 candidates. However, ChatGPT could only process about 50 at a time before prompting her to continue manually. She decided to use Claude.ai, a chatbot well suited to generating code, which we nicknamed Claudia.
We quickly realized that repeatedly prompting the bots to generate HTML wouldn’t be efficient for the fast-paced demands of election night. We decided to ask Claudia for recommendations on the most effective workflow.
With a working HTML template in place, generating layout, colors and formatting, she then asked ChatGPT how to automate large-scale data entry into the HTML. The bot offered several options, ranging from creating a custom Python program to exploring a plugin-based solution (though we found no records of anyone using that method before).
The best and most practical approach was to structure everything in a spreadsheet. Her solution? Use spreadsheet formulas to generate a single, consolidated block of HTML for all candidates (typically 200–300 per county) and measures (about 20 per county).


Claudia helped generate a spreadsheet with all ballot information—races, candidate names, and a placeholder column for votes. It also provided key pieces of code to structure the data:
- Race Box Opening – Signals the start of a new race box.
- Race Box Closing – Marks the end of a race box with all its candidates.
- Candidate HTML – Formats candidate information, controlling design elements like color, font, sizing, padding, and alignment.
- Combined HTML – Merges all the code for each candidate in a given race.
In the same spreadsheet, Ciara created an additional sheet with a single formula designed to compile all the HTML from the “Combined HTML” column. The resulting code was then placed inside the body of the initial HTML template.
Finally, this fully structured code was inserted into the HTML block on WordPress, allowing us to seamlessly integrate ballot data into our site.
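The concatenation the spreadsheet formulas performed can be sketched in a few lines of Python: for each race, emit an opening tag, one formatted row per candidate, then a closing tag, and join everything into one block of HTML. The CSS class names here are hypothetical placeholders, not the templates our spreadsheet actually produced.

```python
# Race Box Opening / Candidate HTML / Race Box Closing, as described above.
# Class names are illustrative placeholders.
RACE_OPEN = '<div class="race-box"><h3>{race}</h3>'
CANDIDATE_ROW = '<p class="candidate"><span>{name}</span><span>{votes}</span></p>'
RACE_CLOSE = '</div>'

def build_ballot_html(races):
    """races maps a race title to a list of (candidate, votes) pairs;
    returns one consolidated block of HTML for the whole ballot."""
    parts = []
    for race, candidates in races.items():
        parts.append(RACE_OPEN.format(race=race))
        for name, votes in candidates:
            parts.append(CANDIDATE_ROW.format(name=name, votes=votes))
        parts.append(RACE_CLOSE)
    return "\n".join(parts)

sample = {"Mayor": [("Jane Doe", 1204), ("John Roe", 987)]}
html = build_ballot_html(sample)
```

The spreadsheet approach did the same thing cell by cell; the advantage was that non-coders on staff could see and correct every intermediate step.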



AI Reflection
When it came to utilizing AI, it didn’t handle the automation directly. Instead, AI provided us with coding formulas that we applied to the scraped spreadsheet data. These formulas essentially generated the necessary code, wrapping it in a way that produced an HTML output. This HTML then populated the dashboard on our site. By leveraging AI in this way, we were able to bypass the need to hire a coder, effectively only spending about a day and a half to get the HTML code correct instead of manually coding everything. It took approximately 18 prompts to finalize the correct template code and spreadsheet formulas, followed by 25 additional prompts to troubleshoot and resolve all errors or corrections.
Through this process, we discovered a key lesson: understanding limitations. Initially, we tasked Claude.ai and ChatGPT with generating HTML from our scraped data. While they could handle small datasets, they struggled with larger ones.
We also discovered that while AI couldn’t directly access our spreadsheet links, it could analyze screenshots of the data. It was capable of creating a CSS template to style the HTML but couldn’t integrate the generated HTML into templates automatically.
In addition, our coding queries to Claudia often timed out during lengthy or complex tasks, even with the paid version ($20/month). It was almost as if she needed a break, much like a human who needs rest to recharge. Each time this happened, we had to wait for her to reset, start a new conversation and re-explain the project details from the beginning. This was particularly frustrating because forgetting to mention any aspect could impact Claudia’s responses. Being as specific and thorough as possible was crucial when communicating with her.
Even when we thought we had a final version, there were small glitches that required us to tweak the code and troubleshoot errors such as misspelled names, missing accents or hyphens, duplicated races and random alignment issues. Identifying the source of these problems, whether from the scrapers, spreadsheet formulas or template code, was a tedious process. One wrong comma in the code or formulas could do a lot of damage.
5. Powering Real-Time Election Coverage
Scrape election data from all 13 counties’ registrar sites in real-time and pull the results to our dashboard.
The Challenges of Standardizing Election Data
Despite extensive advance planning and research, we quickly realized that preparation alone wasn’t enough. The lack of standardization across 13 county registrar offices created significant obstacles for newsrooms like ours, an issue that plagues election coverage across the state and country.
While some counties rely on Clarity Elections or Live Voter Turnout, others use custom-built systems, each with its own quirks. Registrars are not obligated to present election results in a uniform format, making it a recurring struggle to gather consistent data each election cycle. Some counties may use the same format from one cycle to the next, but not always; and, of course, the content itself changes with each slate of candidates and measures.
There are three main reporting systems certified by the state: Clarity, the most dynamic dashboard, which supports JSON; Verity, which uses HTML; and Logic & Accuracy, which uses PDF formatting.
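With three certified systems in play, one workable structure is a small dispatcher that routes each county’s raw payload to a parser for its reporting system. This is a sketch, not our production code; the JSON field names (“Contests,” “Candidates,” “Votes”) are illustrative placeholders, not Clarity’s actual schema.

```python
import json

def parse_clarity_json(payload):
    # Field names here are hypothetical, not Clarity's real schema.
    data = json.loads(payload)
    return [(c["Name"], c["Votes"])
            for contest in data["Contests"]
            for c in contest["Candidates"]]

def parse_verity_html(payload):
    raise NotImplementedError("per-county HTML parsing goes here")

def parse_pdf(payload):
    raise NotImplementedError("PDF text extraction goes here")

# One parser per certified reporting system.
PARSERS = {
    "clarity": parse_clarity_json,
    "verity": parse_verity_html,
    "pdf": parse_pdf,
}

def parse_results(system, payload):
    """Route a county's raw payload to the parser for its system."""
    return PARSERS[system](payload)

sample = json.dumps({"Contests": [
    {"Candidates": [{"Name": "Jane Doe", "Votes": 120},
                    {"Name": "John Roe", "Votes": 98}]}]})
results = parse_results("clarity", sample)
```

Keeping the per-system quirks behind one `parse_results` call means election-night fixes stay local to a single parser instead of rippling through the whole pipeline.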

The Hurdle of “Zero Ballots”
One of our biggest challenges was obtaining “zero ballots,” the test-result templates that registrar offices generate to display election results once polls close. These were critical for us to review in advance so we could properly configure our data scrapers to pull results with the correct code.
In the weeks leading up to the election, we contacted all 13 county registrar offices by phone, email, and social media to track down these templates. It was a painstaking process: some counties released their zero ballots a week before election day, others waited until the night before, and a few, like Mendocino and Napa, didn’t release theirs at all before polls closed. This left us with little to no time for testing.
Ideally, we should have tested our code across all counties in advance to identify and resolve issues before the deadline. However, we were overly confident that it would function seamlessly in all 13 cases when the results went live.
The Election Night Scraping Breakdown
During our initial data scrape for election previews, we successfully pulled all ballot information and populated our AI-coded spreadsheet. However, the team that handled this initial scrape couldn’t guarantee real-time updates on election night.
As a result, we pivoted to a different data scraping team that attempted to retrieve the same content using a different code. The plan was to automate real-time data collection, feeding results directly into our dashboard templates, eliminating the need for manual data entry.
Each county’s website had different layouts and data formats, making navigation difficult. Some required manually copying the HTML into an AI tool to extract data, but response limits made it hard to get complete results. Refining prompts to pull accurate data took multiple tries.
As a result, election night did not go as planned.
Using an AI tool to track live results proved too complicated. The 13 county websites updated every few minutes in different ways, and the tool couldn’t keep up. One major challenge was ranking changes: some counties reordered candidates and measures based on real-time results, while others did not. This caused rankings to shift unpredictably, making it nearly impossible to maintain consistency.
As for human verification, it involved checking row counts and repeatedly re-prompting the bot to ensure accurate formatting. Issues included duplicate names, incorrect name parsing (due to hyphens, length, initials, or misaligned first and last names), and misplaced candidate listings. These errors were resolved by refining prompts, such as requesting a list of “common mistakes to avoid.”
With more time, the prompts could have been improved for each site, but adjusting them across 13 counties in one night wasn’t feasible with our resources.
The new code kept breaking, failing to populate results correctly. Several issues emerged:
- Frequent site updates – Some counties refreshed data every 10 minutes, others every hour, causing inconsistencies.
- Varying site structures – Some counties required users to open separate tabs for specific races, making it difficult to direct the scraper properly.
- Candidate order changes – In some cases, both names and vote totals changed positions, rather than just the numbers updating in place.
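One way to sidestep the reordering problem above is to key every update on the candidate’s name rather than their row position, so vote totals land on the right row no matter how the registrar re-sorts the page. A minimal sketch, with a flag for names that don’t match anything we scraped earlier (feeding the human-verification step):

```python
def merge_update(current, update):
    """current and update both map candidate name -> vote total.
    Returns the merged totals plus any names not seen before,
    flagged for human verification (hyphens, initials, typos)."""
    merged = dict(current)
    unknown = []
    for name, votes in update.items():
        if name not in merged:
            unknown.append(name)  # new or mis-parsed name; check by hand
        merged[name] = votes
    return merged, unknown

current = {"Jane Doe": 1204, "John Roe": 987}
# The registrar's page re-sorted the candidates, but the names still match.
update = {"John Roe": 1502, "Jane Doe": 1399}
merged, unknown = merge_update(current, update)
```

This doesn’t solve mangled-name parsing by itself, but it makes row order irrelevant and surfaces the bad parses instead of silently misplacing them.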
Despite using Claude and ChatGPT as our AI tools, our system couldn’t handle the ever-changing layouts of each county’s site.
With more time, we could have fine-tuned our approach, but on election night, with only an hour to make it work, we simply didn’t have enough runway to perfect the automation.
6. Reflection & Recommendations for Improvement
Here are our key takeaways from the project, along with recommendations for newsrooms looking to take on a similar effort.
AI tools and web scraping are powerful when used correctly, and their capabilities improved rapidly during this project. Based on these advancements, extracting election data should be easier next time; it would be easier still to maintain consistent code if county websites remained unchanged. AI tools have advanced significantly since the final election results were posted in December.
To improve future efforts, data teams should recognize the limitations of AI and establish clear formatting early. Practicing data extraction on different county systems before election night is crucial. Engaging with County Registrar offices in advance, asking about update schedules and previewing the election website can also help. Staying up to date with AI tools and prompt engineering is key.
While our results weren’t perfect, better tools emerged just weeks after the election that could have resolved our challenges. New AI-powered scraping tools can now extract and structure data more efficiently. The rise of “AI Agents” from companies like OpenAI and Anthropic enables multi-stage data retrieval and formatting across different sites, making automated election reporting far more feasible.
So what now? Since our experiment didn’t go exactly as planned, we still sought out answers on how small newsrooms like ours can efficiently deliver ballot information to the public while working with registrar offices, the official stewards of election data. Finding a middle ground that works for everyone has long been a challenge.
To help bridge this gap, we spoke with other newsrooms, election software vendors, registrar officials, the Secretary of State’s office, and others involved in election administration. Their insights helped shape our recommendations for improving the workflow of election data reporting, ensuring that information is accessible and reaches the public more effectively.
Option #1 – Software Standardization
Given the challenges small newsrooms face in automating real-time election results reporting, a potential solution lies in standardizing the software counties use to present election data on their websites. If all county websites presented election data in a standardized format, our code would run seamlessly without breaking. This would ensure uniformity and streamline the data collection process. However, implementing such a solution is not without its hurdles.
Technical inconsistencies between existing systems, budgetary constraints within county offices, and the already overburdened workload of registrars pose significant challenges. Registrars are often tasked with fulfilling mandates from the Secretary of State’s office with limited resources, leaving little room for additional projects.
The funding structure for registrars’ offices can vary between counties, but they typically do not receive state or federal funding. Instead, their funding comes primarily from the county level. The process usually involves approval from the County Board of Supervisors, who determine the budget for these offices. This reliance on county-level funding can create challenges when registrars’ offices are required to implement new mandates or regulations that come without the necessary financial resources to support them.
In these situations, registrars often have to establish an internal reserve or savings to cover the costs associated with fulfilling unexpected and unfunded mandates. This can put a strain on their existing resources and may impact other areas of their operations.
Furthermore, convincing multiple counties to adopt the same software can be a logistical nightmare. Each county may have its own preferences, existing contracts with software vendors, or concerns about compatibility with their current systems. Overcoming these obstacles would require substantial collaboration, funding and training.
Some counties presented their data as a PDF, others on their county website, and some used a third-party site to display their results. As you can see below, each county presents its data differently. A single code solution would not work since their formats vary.

Government support for registrar funding requires a multi-step process, including lobbying, passing legislation, securing funding, and ultimately implementing the solution.
Despite these complexities, the benefits of a standardized system are clear. For small newsrooms, it would mean timely access to reliable election data, enabling them to provide their audiences with accurate and up-to-date information, while spending more time writing insightful stories rather than manually tabulating election results.
Option #2 – Coding Breakthroughs and AI Evolution
As mentioned earlier, AI has advanced significantly since we completed this project. Several AI tools are now on the market, each with different capabilities and limitations.
For example, when we were troubleshooting with Claudia, we couldn’t provide links to documents or spreadsheets; everything had to be fed as static screenshots. Now, Gemini can analyze information directly from a Google link, which would have made the process much faster and more efficient.
Another challenge we faced was how registrar websites stacked the leading candidate on top, confusing the AI because the data wasn’t in a fixed position. With recent improvements, scraping tools are now smarter and better at tracking the order of candidates, making election data extraction more reliable than before.
Option #3 – A Simple File Format
As we researched this project, we wanted to understand the transfer of election data and whether a simple solution existed to bridge the gap for news organizations.

Typically, once batches of ballots are processed and scanned, the data is compiled into a simple file that feeds the public-facing election website. A filtered version of this file, containing only state and national races, is first sent to the Secretary of State.
We wondered whether a public version of the raw data, stripped of design elements and formatting, could be made available, perhaps as a straightforward spreadsheet of vote counts. Since registrars already send a filtered version of this data (excluding local races) to the state, providing a public file wouldn’t add significant workload. In theory, it could be as simple as a quick upload.
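To make the idea concrete, here is a hypothetical example of what that “straightforward spreadsheet of vote counts” could look like, and the few lines a newsroom would need to consume it. The column names are illustrative, not an official format.

```python
import csv
import io

# Hypothetical public results file: one row per race/choice, nothing else.
public_file = """race,candidate,votes
Mayor,Jane Doe,1204
Mayor,John Roe,987
Measure A,Yes,5120
Measure A,No,4310
"""

# Group the flat rows into per-race vote totals.
rows = list(csv.DictReader(io.StringIO(public_file)))
totals = {}
for row in rows:
    totals.setdefault(row["race"], {})[row["candidate"]] = int(row["votes"])
```

A file this plain would need the integrity safeguards registrars raised, but parsing it requires no scraping, no AI and no per-county code at all, which is the point.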
However, registrars expressed significant caution, citing privacy and cybersecurity concerns. If such a file were made public, it would need safeguards to prevent any alteration, editing or manipulation of the data.
Yolo County has a highly dynamic archive system that provides access to all past elections and allows users to download the data in a CSV file. They were the first to spearhead this initiative, advancing their archival data and creating maps that visualize voting territories.

It may be possible to replicate this data retrieval system for other registrar sites. However, there could be roadblocks, as archival data is already certified by the state, and live data undergoes countless updates before certification, which raises additional concerns.
To move forward with this initiative, the same vendor that developed Yolo County’s secure archival data system, Civera, could potentially create a similar solution for reporting live data in a downloadable format. This system would need to be tested in a county willing to pilot the approach, with the hope that more registrars would recognize its benefits. While registrars are only required to follow the Secretary of State’s mandates, if this option is cost-effective, low-maintenance and easy to implement, they may be open to adopting it.
Option #4 – Unique code for each system
While many newsrooms can successfully scrape election data for a single county, a simpler process if they’re familiar with the format and can generate consistent code, scaling up to 13 counties is a far greater challenge.
The Press Democrat in Santa Rosa is one of the few newsrooms that has managed to scrape data for multiple counties, developing custom code for the five counties they cover. Their success lies in fine-tuning their scrapers to accommodate each system’s unique format, allowing them to deliver real-time results. Like our newsroom on election night, they also have to adjust their code at 8 p.m. for counties like Mendocino, which don’t release a zero report.
A potential collaboration with the Press Democrat, and insight into the different scrapers they've developed, could lay the groundwork for expanding this approach to all 58 counties. However, it wouldn't be as simple as copy-pasting solutions; the feasibility would ultimately depend on the varying interfaces and reporting systems used across counties.
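The per-county approach described above can be sketched as a dispatch table: one parser per registrar system, with special handling for counties like Mendocino that don't publish a zero report at 8 p.m. The county names are real, but the feed formats and parser logic here are purely hypothetical, since each registrar site needs its own code.

```python
# A minimal sketch of the one-scraper-per-county pattern.
# The "Candidate: votes" line format is an invented stand-in
# for each registrar's actual results page.

def parse_sonoma(raw):
    # Hypothetical format: one "Candidate: votes" pair per line.
    return dict(line.split(": ") for line in raw.splitlines())

def parse_mendocino(raw):
    # Mendocino releases no zero report, so an empty feed before
    # the first results post must not crash the pipeline.
    if not raw.strip():
        return {}  # nothing posted yet; return gracefully
    return dict(line.split(": ") for line in raw.splitlines())

# Dispatch table: one parser per county's unique system.
PARSERS = {
    "sonoma": parse_sonoma,
    "mendocino": parse_mendocino,
}

def scrape(county, raw_feed):
    return PARSERS[county](raw_feed)

print(scrape("mendocino", ""))           # {}
print(scrape("sonoma", "Jane Doe: 42"))  # {'Jane Doe': '42'}
```

Scaling to 58 counties in this design means 58 entries in the table, which is exactly why maintaining the scrapers, not writing the first one, is the hard part.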
7. Ciara’s Post Script
This is a first-hand account from the intern, now a member of our staff, who played a crucial role in developing the AI-generated dashboard.
As an intern in an industry brimming with talent, I knew I had to make a lasting impression with this project if I wanted a stellar recommendation or even a job. In journalism, being an amazing writer is a given, so I couldn't rely on that alone. Instead, I leaned into my digital skill set, something that is not always taught in J-school if your specialization is not tech-related.
Fortunately, as a Dow Jones News Fund fellow, I received in-depth training in digital media and audience engagement, which gave me an edge. All of my past internships were digital-focused, and that experience definitely helped.
So when my editor, Kat, approached me with this project, I knew I had to make magic happen. As an intern, whether or not I already had the skill, my answer to an ask was always, "Yes, I'll figure it out." Although the point of being an intern is to learn from others, times like this are perfect opportunities to prove your value. I knew that if I could solve a problem that had been a persistent headache for the entire team, I wouldn't just be learning, I'd be showing them that I was someone worth having on board.
My Process
The first step was research: had other newsrooms successfully automated election data reporting? We reached out to a few, and while some had achieved semi-automation, no one had tried to integrate AI into the process. That meant we were entering uncharted territory.
I tested various tools like Infogram, Datawrapper and Flourish, which are fantastic for visualizations and graphing data but not quite suited to election data in its common format. As mentioned above, I decided to take my desires and roadblocks straight to the chatbot itself.
At first, I was skeptical. I’d used chatbots before, but never for something this complex. However, I quickly realized that the more context you provide, the better the responses. Since chats don’t carry over information between sessions, I had to re-explain my project every time the conversation reset. My advice? Write a detailed project description and keep it handy. You’ll save time and avoid missing key details that could affect the bot’s responses. (I learned this the hard way.)
After countless interactions, I developed a working relationship with the bot, specifically Claude.ai, which we affectionately nicknamed Claudia. Claudia wasn’t just a tool, it felt like a collaborator as well. It celebrated when we got the code right and acknowledged my confusion with a kind, almost human-like tone. The experience was so natural that I’d tell my editor, “Claudia and I will get on it,” as if I were working with an actual colleague.
And we did. Whenever a glitch happened, I took it to Claudia, and with many prompts and screenshots we were able to pinpoint the issue.
It took me about three days of researching different options before I could finalize the design for the dashboard. Then came the real challenge: coding. It took a day and a half to create the code for one county. I remember it vividly: a pounding headache, at least 50 screenshots of the dashboard glitching, and then, finally, Claudia telling me I got the code right. I couldn’t believe it. Me, a non-coder, had built a fully functional dashboard.
Once I had the first county working, replicating the code and creating pages for the remaining counties took about a day and was simple. But what ate up the most time? The glitches. The scraper wasn’t always accurate when pulling data and even the smallest errors like a duplicate race box, a misaligned section, or an incorrect profession could throw everything off. Troubleshooting meant digging through multiple layers: Was the issue in the ballot info? The template code? The spreadsheet itself? Finding the root cause of each glitch was a painstaking process. In total, troubleshooting all the errors stretched across an entire week.
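Much of that week of troubleshooting boiled down to spotting bad rows before they reached the dashboard. A small sanity-check script could catch the glitches described above, such as duplicate race boxes or rows with missing fields. The column names here are hypothetical, standing in for whatever the scraper's spreadsheet actually uses.

```python
# A sketch of sanity checks for scraped election rows.
# Flags duplicate race/candidate pairs and rows missing fields.

def find_glitches(rows):
    """Return (row_index, problem) pairs for suspect rows."""
    seen = set()
    problems = []
    for i, row in enumerate(rows):
        key = (row.get("race"), row.get("candidate"))
        if None in key or "" in key:
            problems.append((i, "missing field"))
        elif key in seen:
            problems.append((i, "duplicate entry"))
        else:
            seen.add(key)
    return problems

rows = [
    {"race": "Mayor", "candidate": "A. Lee"},
    {"race": "Mayor", "candidate": "A. Lee"},  # duplicate race box
    {"race": "Mayor", "candidate": ""},        # misaligned/empty cell
]
print(find_glitches(rows))  # [(1, 'duplicate entry'), (2, 'missing field')]
```

Running a check like this after every scrape would have answered the "is it the ballot info, the template or the spreadsheet?" question much faster, by at least ruling the spreadsheet in or out.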
People often compliment me on this work, but the truth is I’m not a coder or a data person. The real MVP here is AI. Using it effectively is less about technical expertise and more about knowing how to communicate with it. Just like explaining a complex idea to a child, you have to phrase things in a way that makes sense for the bot to generate the best responses.
I highly recommend watching prompt engineering tutorials to learn different techniques for getting the most out of AI. It’s also important to familiarize yourself with the specific bot you’re using. Pay attention to those pop-ups that appear, because they often introduce advanced features, like new ways to integrate links or connect data. Understanding both the bot’s capabilities and its limitations is the key to making it work for you.
My Takeaways
This project was more than just a learning experience—it was electrifying. Covering my first election truly felt like the Super Bowl of journalism. Election Day was an all-out sprint, starting at 9 a.m. and pushing straight through to 2 a.m. the next day. With my heart racing, adrenaline pumping and five cups of coffee fueling me, I was completely immersed in the moment and part of something far bigger than myself.
But it wasn’t just the chaos of Election Day that made this experience unforgettable, it was the months of collaboration leading up to it. Our team had been working toward the same goal for weeks, each person playing a crucial role. Like a body’s vital organs (the heart, brain and lungs), every part had to function seamlessly. If one piece failed, the entire system was at risk.
It was invigorating. Like many graduates shuffling through applications and hearing many no’s, I found that seeds of doubt are bound to sprout. But this experience fit me like a glove and solidified my passion for news and public service.
Also, I got promoted to a full-time job.
8. Resources & Leads
Here’s what we used to put this playbook together and other information to give context to the election process.
Office of Voting Systems Technology Assessment: This office is responsible for the examination, testing, and certification of voting systems for use in California elections. It also oversees the approval of ballot printers and the monitoring of ballot manufacture and distribution.
California Regulations: This page provides access to the current regulations related to elections in California. It covers various topics including voter registration, voting options, and election security.
Office of Election Cybersecurity: This office works to ensure the security of California’s elections against cyber threats and misinformation. It collaborates with various agencies and provides resources to protect the integrity of the voting process.
Voting Modernization Board: Established by Proposition 41, this board is responsible for allocating bond funds to counties for upgrading voting systems. It considers applications and awards funding for the purchase of certified voting equipment.
How To Use Your County’s Voting System: This page explains that the voting system used varies by county and directs voters to information on how to use their specific county’s system. It highlights that each county’s elections office selects its own equipment.
County-by-County Voting Equipment Chart: This chart provides a list of the voting equipment used by each county in California as of January 2020. It allows users to see the specific systems employed in different areas.
Proposed Election Regulations: This section of the website provides information regarding election regulations that are currently being proposed. It allows interested parties to review potential changes to election laws and procedures.
Risk Limiting Audits Regulations: These regulations, adopted in 2020, outline the procedures for conducting risk-limiting audits in California. Risk-limiting audits are a post-election process using statistical methods to confirm election results.
History of Voting Systems in California: This document from 1999 provides a historical overview of voting systems used in California. It traces the evolution of election technology in the state.
Modernizing California’s Voting Technology: This report discusses the history of voting systems in California, the transition to new equipment following federal and state legislation in 2002, and the ongoing efforts to modernize voting technology, particularly in Los Angeles County.
Contact Info: This is a spreadsheet of different organizations, registrars and people that we interviewed or reached out to for this project and playbook.
