Data analysis with AI

By Anthropic

Summary

Topics Covered

  • Test AI on Past Data First
  • Refine Prompts Iteratively
  • Note AI's Capability Gaps
  • Retain Accountability Always
  • Non-Experts Use AI for Data Basics

Full Transcript

In our last lesson, we dealt with data privacy and security: what you absolutely need to protect and how to do it. So now let's talk about the question that's probably stopped you from using AI for data analysis in the first place: how can I trust the results? Today's lesson is about the delegation diligence loop. Specifically, building confidence in AI's analytical capabilities for your specific work by systematically testing it against data you already understand. By doing this, you can better understand how AI will support your specific circumstances.

The process starts with delegation. Here's how this works. First, identify a specific analytical task you do regularly that you want to delegate to AI. Find past data where you already completed the analysis, and then work with AI to reproduce what you did, evaluating what works and what doesn't. Refine your approach and test again. If AI can match your known results, you know how to use it and can trust it for similar future tasks. And if not, you've learned that this task is something you shouldn't delegate. So let me show you what this looks like in practice, and then we'll talk through what to do if you're not that data-savvy to begin with.

Meet Rio, the program director at Valley Veterans Services. Every quarter, he analyzes program attendance alongside employment outcomes: calculating participation rates, tracking monthly changes, and determining whether attendance correlates with job placement success. This analysis consistently takes him hours. Considering delegation, Rio knows he wants to continue using the results of this analysis to improve his program. He wants to interpret the results himself, but he could do without the data cleaning and formula mayhem he usually finds himself in to do the actual analysis. So, in order to test whether AI is appropriate in this scenario, he's going to evaluate it using last quarter's data. He knows exactly what that data showed after he analyzed it without AI, and he has the raw, messy data from before he started.

This is his test case. Rio uploads the data and starts to work with AI, using description and discernment to perform his analysis. Only each time the AI responds, Rio is going to check the results against what he knows to be true and jot down potential gaps in AI's reasoning. Sometimes additional description helps AI get the outcome he's looking for. In these cases, Rio knows he has to include that kind of information for future data analysis tasks. Other times, Rio might find legitimate capability gaps. This is the delegation diligence loop in action. Rio's diligence in evaluating the model's capabilities can change what he chooses to delegate to AI in the future. His first attempt might look like: "I'm sharing attendance data and employment outcome data from our job training program last quarter. Please analyze the participation patterns across the three months and graph the correlations between attendance levels and employment success. I'm particularly interested in understanding whether consistent attendance predicts better job placement outcomes."

AI responds with a summary, but rather than assuming this is fact, Rio checks it against his records and notes what's good and what's not. AI correctly identified the correlation between program attendance and job placement, but it missed a critical insight around the combined housing assistance and job placement program. So Rio refines his description, asking AI to try again but to pay special attention to the program type. This time, AI catches its mistake. So Rio notes that for future quarters, he'll need to specifically ask the AI to consider the program type when performing its analysis.

Then he tests something harder: "Can you also look at this based on when participants enrolled?" AI responds, and Rio observes that despite not having the enrollment data, AI could help extract it. He makes a note to cross-reference these results later on.

By going through this process, Rio has systematically validated what AI can and can't do for his quarterly reporting. He's learned that with the right description, AI can accurately reproduce the analysis he used to do manually. But he's also identified clear limitations and areas for follow-up: AI needs enrollment dates in the data to do cohort analysis; otherwise, it'll try to infer them, which he doesn't want. And most importantly, Rio now has a tested approach that he can confidently use with this quarter's data, and clear notes about what information he needs to include and what context he still needs to add himself. When Rio uses this validated approach with new data, his diligence continues. He'll check whether numbers make sense based on what he knows about his programs. He'll take accountability for the final report, and he'll be transparent about AI's role if asked. But now he's working from validated confidence, not guesswork.

So here's the framework. Identify a specific analytical task that you want to delegate. Be precise about what you need. Then find past data where you already completed that analysis: you need the right answers to evaluate whether AI can arrive at them. Work with AI to reproduce your past analysis and systematically evaluate the results. What did AI produce? How did it approach the task? How did it communicate findings? Identify gaps, refine your delegation, and then test again. If you can validate that AI produces correct results, you've built an approach that you can confidently use on new data. But if you can't get there after several refinements, you've learned that this isn't a task you should delegate.
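
The "validate that AI produces correct results" step can be as mechanical as a tolerance comparison. A hypothetical sketch, with made-up metric names and figures, of checking AI-reported numbers against the ones you computed yourself:

```python
# Figures from the manual analysis you already trust, next to the
# figures AI reported for the same quarter (all values hypothetical).
known = {"participation_rate": 0.55, "placement_rate": 0.60}
ai_reported = {"participation_rate": 0.54, "placement_rate": 0.72}

def validate(known, reported, tolerance=0.02):
    """Return the metrics where the AI's figure drifts beyond tolerance."""
    return {
        name: (expected, reported[name])
        for name, expected in known.items()
        if abs(reported[name] - expected) > tolerance
    }

gaps = validate(known, ai_reported)
for name, (expected, got) in gaps.items():
    print(f"check {name}: expected {expected:.2f}, AI reported {got:.2f}")
```

A small tolerance is worth allowing because rounding or slightly different data-cleaning choices can shift figures without being real errors; anything beyond it is a gap to investigate and refine your description around.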

So, this is all great, but what if you're not very comfortable with the data to begin with and wouldn't be able to spot those process gaps yourself? AI can also be a useful tool to brainstorm and implement solutions you might not have thought of on your own. Because AI models are uniquely good at coding, they can help with things like writing Excel formulas, reformatting messy data, and more.

In these cases, you can simply bring your question or idea to AI and specifically ask for help understanding what a solution could look like, just like how you would work with a data analyst on your team. As you work with AI, just keep asking for clarifications and explanations so that you can follow the process and understand the final output. Just remember: validation builds confidence, but it doesn't eliminate responsibility. You're still accountable for checking that these results make sense and for being transparent about AI's role in your analysis process. This testing works for any analytical task you're considering: donor analysis, budget forecasting, survey synthesis, outcome tracking. Test first, validate what works, then apply with more confidence, or learn what you shouldn't delegate at all. In our next lesson, we'll look at workflow augmentation and how to apply these same principles when AI handles routine tasks on your behalf.
