Automating GitHub Repo Maintenance with AI Agents
By Microsoft Reactor
Summary
Topics Covered
- Automate GitHub Issue Triage with AI Agents
- Human-in-the-Loop is Crucial for AI Issue Triage
- AI Agents Excel at Processing Large Contexts Quickly
- Automate Repo Maintenance with AI-Assigned Issues
- Thorough CI Enables Deeper Delegation to AI Agents
Full Transcript
Hey everyone, thank you for joining us for our next session, Automating GitHub Repo Maintenance with AI Agents. My name is Anna. I'll be your producer for this session. I'm an event planner for Reactor joining you from Redmond, Washington.
Before we start, I do have some quick housekeeping. Please take a moment to read our code of conduct.
We seek to provide a respectful environment for both our audience and presenters.
While we absolutely encourage engagement in the chat, we ask that you please be mindful of your commentary, remain professional, and on topic.
Keep an eye on that chat. We'll be
dropping helpful links and checking for questions for our presenter to answer.
Our session is being recorded. It will
be available to view on demand right here on the Reactor channel. With that,
I'd love to turn it over to our speaker for today.
Hello. Hello everyone. Welcome to the live stream today about automating GitHub repo maintenance with AI agents.
I see we've got people coming from all over: Argentina, Uruguay, Pakistan, England, Canada, Anna from Washington, San Diego. It's great to have people tuning in from all over. If you haven't shared yet, share in the chat where you're from, and maybe what you're interested in learning today about this topic.
So what are we going to talk about today? How I deal with my two problems. Problem number one is that I've got too many issues in my GitHub repos, and problem number two is I've got too many GitHub repos. So I've been trying to relieve my burden by using AI agents. Sometimes that means using GitHub Copilot agent mode, and sometimes that means actually programming an agent using an agent framework like LangChain or LangGraph or Pydantic AI. There's a whole range of solutions, so I'm going to walk you through mine, and I'm going to try to live demo all of them to give you a feel for how they actually work. Hopefully this will give you some ideas for how you can maintain your own code repositories with the help of these agents.
So let's talk about problem number one: too many issues.
This is a good problem to have. When you have too many issues in a repo, it means that it's a popular repo. It means people are using it. So it's a good thing. I'm glad we have so many issues; it means people are using it and letting us know what's going wrong. The biggest repo this is happening on is the Azure Search OpenAI demo repo. If you've heard me talk before, you've probably heard about this repo, or maybe you've deployed it yourself. It is a very popular repo: it's got 5,000 forks, it's been deployed thousands of times, and currently it's got 510 open issues. Quite a few of these are old issues. I do have the stale bot enabled on this repo, just marking issues as stale when they've been inactive for a period of time, and you can see that 56 of them are even marked as stale. So there are quite a few issues, and quite a few of them haven't had any activity in a long time. In theory, we should be able to close a lot of these issues, because I don't actually think there are 500 things wrong with the repo. At least not 500 things that I can fix.
So how can I more efficiently close these issues, so that we can have a cleaner issue tracker and so that my manager is happier?
So when I'm deciding whether to close an issue, this is the thought process I go through. I can definitely close an issue if we fixed the bug, either with a code change or maybe just a documentation change. Or maybe the bug was in some part of the code that's been deleted since then, so it's just an obsolete bug. I can maybe close an issue if it got marked as stale. If the stale bot marked it as stale, that means it hasn't had activity in a certain amount of time. Sometimes that means an issue can be closed, but not all the time, because sometimes an issue hasn't had activity for a while just because there's nothing more to say, but it's still a valid request. That's why I don't have the stale bot automatically close issues: personally, I hate it when I file an issue on a repo and the stale bot marks it as stale and then comes in and closes it, even though it's still a valid issue. So I don't want to just immediately mark something as closed because it hasn't had activity for a while. That's just one signal that maybe it can be closed, but we don't know for sure unless we really look at it and see: is this still a valid open issue? If there's some issue that was very environment-specific and we've never replicated it, well, there's not much we can do about it. And then some things are feature requests that are just too far outside the scope of a repo. That's hard to reason about, because in theory we can do anything, but we have to have boundaries in life, maybe. So sometimes we might decide to close something and say it's out of scope. And there are some issues that we can't close, because there are still active bugs or still valid feature requests that are in the scope of the repo.
So it's important to think through how a human would close an issue in order to figure out how we could use an AI agent to close the issue, because we want to replicate that same decision-making process with the AI agent.
So, what does the human need to know to make a decision about an issue? We need to know what the code currently looks like: how does the code work, what are the features in it, what are the bugs? We need to know what the documentation describes, whether something's covered in the documentation. It's helpful to look at the pull requests, to see if any pull request specifically fixed an issue. It's also helpful to look at other issues, to see if an issue is actually a duplicate, whether it's been discussed before, whether there's already a resolution in those other issues. Sometimes that information lives inside my head, but if it's not in my head, then I can look in the actual repo, look on GitHub, and get all the information that I need in order to close an issue.
So now that we've thought through how a human would triage issues, how could we automate issue triaging with AI agents? I'm going to show you three different approaches, ranging from a low-code approach to a very high-code approach, and they all have their pros and cons. We'll start with the low-code one, using VS Code plus GitHub Copilot plus the GitHub MCP server, and then we'll go from there.
Now, the important thing is that for all of these approaches I am trying to keep a human in the loop, meaning that I don't want to just have an agent go off and close issues on its own without checking with me, because I do not yet trust agents to that extent. I want to always see what the agent has decided and then say: yes, okay, good, close it, or: no, I disagree, keep it open. That, I think, is a really important thing when we are working with agents these days: keep a human in the loop. Agents can get us a lot of the way there, but not 100% of the way, and we want a way of supervising and confirming whether we agree with an agent's behavior.
Okay, so let's start with GitHub Copilot. What I'm going to do is actually open the repo here. This is the Azure Search OpenAI demo, the RAG repo, the one that has all of the open issues. I have it open in VS Code. What I can do in here is use a custom prompt, and the way I made it: if you go to your Command Palette (let me try and make this bigger), you can say "new prompt file", and then you can name your prompt and define your prompt there, like "find an issue to close, do not actually close it". Making a custom prompt file is a helpful thing to do if there's some sort of process you do repeatedly. You don't want to have to keep copying and pasting the prompt from some notepad; instead, you can store the prompt in your repo and then reference that prompt from agent mode. I already have one, so I'm going to close this and delete that one.
So here is my issue triager prompt. It says that it works in agent mode, and it says: "Okay, you're an issue triager. Do not close issues unless I tell you to do so." That's how I keep the human in the loop. And I tell it: first you're going to find stale issues, then you're going to examine the issue, search the docs, decide if it's obsolete, and then output this information.
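To give you a rough idea, a prompt file like that looks something like this. It lives under .github/prompts/ and ends in .prompt.md; this is a simplified sketch, not the exact prompt from the repo:

```markdown
---
mode: 'agent'
description: 'Triage one stale issue and propose (but do not post) a closing reply'
---
You are an issue triager for this repository. Do NOT close any issue
unless I explicitly tell you to do so.

1. Find one stale issue using the GitHub MCP server tools.
2. Examine the issue, and search the docs and code for relevant context.
3. Decide whether the issue is fixed, obsolete, or still valid.
4. Output a summary of the issue, your recommendation, and a suggested
   closing reply for my review.
```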
And then I've got a bunch of other tips in here based off of my experience using it. Now, the way I use it is that I'm just going to type slash, and then I can select this prompt from the slash menu. When you type slash in agent mode in GitHub Copilot, it will show you all the possible prompts. These could be prompts in your repo, prompts from MCP extensions, from built-in tools, from VS Code extensions. Tons of different prompts. So I'm going to say: find one issue to close from Azure Search OpenAI demo. All right.
So it knows that it needs to follow the instructions here and find an issue to close. Now, we need to check: do I have the GitHub MCP server enabled? I do. This is really important: I've got the GitHub MCP server enabled. This server can do all kinds of things; it's got hundreds of tools. It can search code, it can search issues, it can do all kinds of things. This prompt assumes that it has the GitHub MCP server, and that's what it's going to use to find issues. You can see that first it lists the issues, so it does this query to the MCP server. Then it starts searching through the repo, and this is actually pretty fast, because it's using the built-in search of VS Code, which is quite fast. So it found an issue, and it said: all right, here's a summary of the issue.
Here's the suggested closing reply, and then I can say whether I want to make any changes or not. What I typically do is read through the suggested reply, and I often check the issue too, to make sure I have full context, and I can go back and forth and say: oh, let's change the reply. So let me actually show you one that I did earlier, because, as I was saying, I like to actually give my own input. You can see what I did here when I had more time to look through the reply. Here it has a suggested closing reply (this is for a different issue), and I didn't think the reply was quite thorough enough, so I said: mention this additional fact as well. Then it said: okay, here's the new suggested closing reply. And I said: okay, that looks good, please close the issue with that reply. Then it used the GitHub MCP server to add that comment to the issue on my behalf and then close the issue. So it used two different tools, one to add the comment and one to close the issue. Let me show you what the actual issue it closed looked like yesterday. You can see that it added the comment on my behalf, because the GitHub MCP server is authenticated as me. So when it adds these comments, it's happening as me, and that's partly why I want to be in the loop: if it's coming from me, I'm associating my name with these comments, so I want to feel good about them. So it did all that work for me.
I see a question: is there a global prompt directory you can pull from, if you want prompts to be modular across repos?
Yeah, that's a good question. I believe so. When you do "new prompt file", instead of choosing .github/prompts, which is in your repo, you can choose your user data folder. You can see that on the Mac it goes to the Library folder, so that's a global folder. So yeah, if you are using the same prompt across multiple repos (and this would actually be a good prompt to share across repos, as long as it didn't have the repo hardcoded in it), then you could store it in your user data folder and always reference it there. Great question.
Okay, so that's the technique of using a prompt. This works pretty well, actually. The tricky thing here is that we had to make sure we had the MCP server enabled. If I hadn't had the MCP server enabled, it would have been a lot harder for it to actually do anything: it wouldn't have been able to search the issues and post the comments and all that. That was approach number one.
Now, the other drawbacks. One, I had to have the GitHub MCP server enabled. Two, it could have used any tools from that MCP server, and I generally don't like to have powerful tools enabled all the time. In order to use that prompt, I had to have that server enabled, with all these tools enabled, and this is a lot of tools. It makes me uncomfortable to have all these MCP server tools enabled all the time. So I'd rather have more control over exactly which tools this process is going to use.
So what I'm going to use next is a custom chat mode, and this is slightly different from a prompt. For a custom chat mode, you would similarly do "new mode file" from the Command Palette. Once again, a custom chat mode can be in your repo (in this case it goes in .github/chatmodes) or it could be in your user data folder. And for the earlier question about where this is stored: the user data folder is a global folder that would be referenced in all of my windows. So if I stick it in my user data folder, I will have this chat mode available for all of my VS Code Insiders windows.
In this case, I have it checked into the repo, because I use it a lot. This one has a bit more information at the top. We've got a description. We have a preferred model: I really like GPT-5, it's my favorite. I tried out Claude Sonnet 4.5 today and it is not my favorite, but everybody has their own preferences, so you don't have to specify the model. But since I've tested this with GPT-5 and I know it works well there, I put the model as GPT-5. And then what I really like is that we can specify the tools, and you can configure the tools. You can see here that I've told it that it can use a bunch of different tools. This is a combination of built-in tools that it's allowed to use, and I actually restricted it quite a bit, because in this mode it shouldn't be doing much to the repo itself. It looks like I did let it edit files, but I don't even think it would need to do that. The main thing it needs for the repo in this mode is tools to search the repo; those are the most important ones to enable.
And then I enabled tools from the GitHub MCP server. Here I've got add issue comment, assign Copilot to issue (that could be good), get commit, get discussion... lots of gets. These are all gets, so I'm very comfortable giving it access to these read-only operations where it's fetching data and listing data, but I don't give it access to most of the write operations. The only write operation that it gets is update issue. You can see the full list here, and it's still quite a long list. It's got a lot of capabilities, and really you could probably get rid of quite a few of them; I don't know if it needs security advisories and secret scanning alerts, but they're pretty safe because they're read-only. And then we've got a really similar description to the prompt's. The big difference is really the tool configuration, and to me that's pretty important. I like that difference.
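A chat mode file along those lines looks something like this. The frontmatter fields (description, model, tools) are the ones VS Code uses for chat modes, but the tool names shown here are illustrative, not my exact list:

```markdown
---
description: 'Research issues and propose resolutions; read-only except issue updates'
model: GPT-5
tools: ['search', 'get_issue', 'list_issues', 'search_code', 'search_issues', 'get_commit', 'add_issue_comment', 'update_issue']
---
You are an issue triager. Research the issue using the read-only tools,
then propose a closing reply. Never close an issue unless I confirm it.
```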
So, to use a custom mode, you go to your mode picker here. You can see the built-in ones right now are agent, ask, and edit, and then we see our custom modes. I've got a fixer mode, and then I've got the triage mode; that's the one I want here. And I'll just say: find one stale issue from Azure Search OpenAI demo to potentially close. I don't have to be this verbose; I could just say "find an issue" and it would find one.
Yeah, I see some commentary about favorite models. Everyone has their favorite models. I think Claude Sonnet 4.5 is a great coding model; I just don't like that it says "absolutely" and "perfect" all the time. I don't like being complimented, and I like that GPT-5 does not compliment me. That's why I use it.
Okay, so here you can see this looks really similar. It's using the GitHub MCP server, and we are using the to-dos feature, which is a nice feature you can enable in your modes if you want the model to come up with a to-do list and mark items off as it goes through its task. And then it's doing lots of searching here. So it's going to end up being a pretty similar experience. And here we go: we've got the candidate stale issue (this one's actually different from the other one), and we've got a suggested closing reply, and it says a lot of things. Okay, so that looks pretty reasonable.
Once again, I will go back to a previous one, just so I can spend more time looking at it. Here's the one I did yesterday. It gave a reply, and once again I told it to modify that reply; you can see I very often work back and forth with the agent. It says: "Okay, here's the new suggested closing reply." And I said: "All right, looks good. Close the issue." It closes the issue and marks all its to-dos as done. So there we go. Really similar to using the prompt, but the difference is the control of the tools.
All right. So that works really well, actually. It's a pretty good way of going through issues. The advantage of using an agent here is that the agent can really quickly do searches, look at other issues, and digest all that information. It can read faster than we can. The thing about agents is that they can read a lot of context much faster than we can, and I consider myself a pretty fast reader. So they're really good at reading, getting all the information, synthesizing it, and suggesting something. And I like that it gives me a suggested reply, because I personally have a hard time closing issues, just from a social anxiety perspective. I feel bad closing issues; I feel guilty closing issues. But if I have a suggested reply that is well thought out and helpful, then I'm more likely to be able to close an issue. It gives me the ability to overcome my anxiety and guilt in order to actually post the reply and close the issue.
I see there's a question: at what level of simplicity am I comfortable going hands-off with the agent? I'm only comfortable with that for things that are really low risk. For example, I do have an agent that monitors my LinkedIn requests. This doesn't have to do with any repos: I have an agent that decides whether to accept LinkedIn invitations, and this is really, really low risk, because the worst that happens is that I don't connect with somebody on LinkedIn, or I connect with somebody that I maybe didn't want to connect with. That's really not a big deal; it's just a social media acceptance. So, for example, I'll just run the agent here so you can see it. That's the only kind of thing I'm really comfortable having be hands-off. And even here, I gave the agent the ability to say that it can't decide. Even in this case, I wanted the agent to be able to say: "Sorry, I can't decide on this one. It's not clear enough." In that case, I just manually triage whatever's left. But a lot of times it is comfortable deciding. You can see my hands aren't touching the screen: it's using Playwright to automatically go through those invitations and accept them, and it's using an LLM (oh look, somebody's in the stream right now, hi Stephen) to look at each invitation and decide: is this person technical? If they're technical, then it accepts them; if it's a recruiter, then it doesn't accept them. That's the criteria, and the LLM is very good at deciding that. So this is the kind of thing where I'm comfortable being fairly hands-off, while still giving it an escape hatch if it really can't decide. It's really, really low risk, whereas a GitHub repo affects other people. This only affects me, but if I close an issue, that affects other people and potentially means their issue is not going to get addressed. So there I need a higher level of confidence and a higher level of oversight. Okay, this will keep accepting in the background, so I'll close it. But yeah, that's a great question.
All right, so the chat mode works pretty well. The disadvantage there is that I'm very much in the loop. How would I go and do this for a bunch of issues? These requests are all linear, all synchronous: I have to go and type in and say do another one, do another one, do another one. What if I wanted to bulk process dozens of issues at the same time? In that case, I need a way of firing off a bunch of things at once, and I need a bit more control.
So that's when I started writing similar workflows as Python programs. In these Python programs, the idea is that I'm going to write an agent that is going to find an issue, hand it off to the agent, use the GitHub MCP server, and then come up with a proposed action. Let me first show you the simplest version of this over here. Okay, so this is a Python program using LangChain v1, and you can see it is programmed to use the GitHub MCP server. In this case I'm passing in a personal access token, and this token only has permissions to a single repo. That's a nice thing you can do: another way of really restricting the amount of damage an MCP server can do is to configure it, like I configured the GitHub MCP server, with tokens that have very specific fine-grained controls. This token only has access to this particular repo and can only do certain things in that repo. So you can have a huge amount of control with these fine-grained tokens, in order to reduce the amount of potentially destructive action that an MCP server can take. And then I do another thing: I get the tools, and I actually filter down the tools in code as well. In this case I really filtered it down and said: hey, these are the only four tools you are allowed to use. So I'm being very restrictive, trying to rein it in. And then I say: okay, find an open issue that can be closed. It's using a really similar prompt; we're going to see this prompt quite a bit. It's got some differences, because now it has to use the GitHub MCP server for searching the code, so I gave it a bunch of tips for how to do that, because it's kind of hard to do well. But otherwise, it is a very similar prompt. So let me go ahead and run this one, and let's see if we can get it to find an issue for us. In this case, this one is only going to come up with a proposal; it's not going to actually follow through on the proposal, because we have to get a bit fancier in order to follow through. This is basically an intermediary step before we get to my final solution.
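The core of that program looks roughly like this. This is a simplified sketch assuming the langchain-mcp-adapters client and the hosted GitHub MCP endpoint; the tool names, model string, and connection settings are illustrative rather than my exact code:

```python
import asyncio
import os

from langchain.agents import create_agent  # LangChain v1 agent API
from langchain_mcp_adapters.client import MultiServerMCPClient

# Connect to the hosted GitHub MCP server using a fine-grained PAT
# that only has access to one repo. (URL and header names per the
# GitHub MCP server docs; adjust for your setup.)
client = MultiServerMCPClient({
    "github": {
        "transport": "streamable_http",
        "url": "https://api.githubcopilot.com/mcp/",
        "headers": {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
    }
})

# Only these four (read-only) tools are allowed through.
ALLOWED = {"list_issues", "get_issue", "search_code", "search_issues"}

async def main():
    tools = [t for t in await client.get_tools() if t.name in ALLOWED]
    agent = create_agent(model="openai:gpt-4.1-mini", tools=tools)
    result = await agent.ainvoke({"messages": [{
        "role": "user",
        "content": "Find one stale open issue that could be closed and "
                   "propose a closing reply. Do not close anything.",
    }]})
    print(result["messages"][-1].content)

asyncio.run(main())
```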
Okay, so here we can see the first thing it's going to do is find an issue. It found some issues, and now it's starting to search the code. This is one of the disadvantages of using the agent this way: every time it needs to search the code, it has to make a network request to the MCP server, versus in VS Code, where it could just search locally, which is faster. All right, so here we got the issue proposal. This is one we actually saw in VS Code as well; they both found the same issue. It says we should close it, and here's the suggested reply message. So now I would still need to actually post the reply; this only gets us partway. But I wanted to show you this code before we go to the more sophisticated code.
Let me just check on the questions here. Does the agent have a certain level of judgment when choosing to adhere to instructions, or is it rigid in the rule following? That's really going to depend on which model you're using. I do tend to use GPT-5 these days when I can. Let me see if that's what I'm using here (don't steal that token). Here, this is using GPT-4.1 mini, I think, because I didn't have enough quota for GPT-5; I need to get more quota for GPT-5. But generally I prefer GPT-5 for instruction following and for not doing too much work. It's still hard, though. In this prompt here, you can see I have a bunch of lines that say you have a budget of six tool calls. I've got it three different times in here: do not make more than six tool calls. The problem was it was doing way too many tool calls, and I didn't have the quota for it. I was even going over GitHub quotas, because the GitHub search API itself has quotas, so I was trying to keep it under, but I had to put that all over the prompt. It's like all LLMs: they roughly follow the instructions, but there's definitely variance, and sometimes you've got to do some prompt engineering.
Okay, and I also saw a question about the LinkedIn agent: if you want to see it, I'll put the link in the chat. Everything I'm showing is public code, so we'll share all the code in the chat and in the description of the video.
All right. So this is the start of an agent that uses the GitHub MCP server, but we need to bring the human into the loop. How are we going to do that? I could do a command line interface where I go back and forth, but then I'd have to re-implement chat. Instead, what I decided to do is use this thing called Agent Inbox. What's going to happen is that the agent is going to come up with a proposal and send it to Agent Inbox, which is an inbox of actions from agents for you to approve, and then I'll approve it in Agent Inbox and decide whether it's good. So here's the flow: I find the stale issue using the GitHub API. Then I research the issue using a LangChain agent. Then I propose an action using an LLM and send it to Agent Inbox. I, the human, look at what's in my Agent Inbox and decide whether it's good or bad, or whether I want to edit it, and then it goes back to the code, and the code will close the issue.
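Conceptually, the LangGraph side of that flow looks something like this. This is a simplified sketch rather than my exact code; the node names and the interrupt payload shape are illustrative, and the exact schema that Agent Inbox expects is documented in its repo:

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.types import interrupt
from langgraph.checkpoint.memory import MemorySaver

class TriageState(TypedDict):
    issue_number: int
    proposal: str  # the suggested closing reply

def research_issue(state: TriageState) -> dict:
    # ... run the LangChain agent with the read-only GitHub tools ...
    return {"proposal": "Suggested closing reply goes here."}

def human_review(state: TriageState) -> dict:
    # interrupt() pauses the graph run; Agent Inbox surfaces this payload,
    # and the run resumes with whatever the human submits (possibly edited).
    decision = interrupt({
        "action_request": {
            "action": "close_issue",
            "args": {"issue": state["issue_number"], "comment": state["proposal"]},
        },
    })
    return {"proposal": decision["args"]["comment"]}

def apply_decision(state: TriageState) -> dict:
    # ... post the (possibly edited) comment and close the issue ...
    return {}

builder = StateGraph(TriageState)
builder.add_node("research", research_issue)
builder.add_node("review", human_review)
builder.add_node("apply", apply_decision)
builder.add_edge(START, "research")
builder.add_edge("research", "review")
builder.add_edge("review", "apply")
builder.add_edge("apply", END)

# A checkpointer is required for interrupts; invoke with a thread_id
# in the config so the paused run can be resumed later.
graph = builder.compile(checkpointer=MemorySaver())
```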
So let's see what that actually looks like. This is in another repo; this one's a little fancier. (Uh, stop looking at my .env. All right, I'll reset all my keys after this.) Okay, so here I've got my more sophisticated agent running. This is a LangGraph agent, so let me show you what it looks like in LangGraph Studio.
My LangGraph agent goes through different phases. Let me go ahead and start off a new thread for this LangGraph agent. It's already selected an issue, and now it's researching the issue. I can look at the traces here and watch it. This is using LangSmith, which is an observability platform and a nice way of looking at the traces. I can see this call going out to GPT-5 mini (that's what I'm using in this case), and then I can see all the tools it's using: it's searching the code, it's fetching a file, it's searching the code, it's fetching a file, it's searching issues. It's doing all these tool requests and adding all this information to the thread in order to do a really, really thorough search. So this context window is actually getting quite big. You can see the number of tokens here: it's currently at about 18,000... now it's at about 24,000 tokens. The context window is getting quite big because it's doing all these tool calls and appending everything to the context. We could reduce that size by doing some summarization along the way. I haven't done that yet, because it seems to work well enough, but it does get a bit long.
So we can see all the different calls here to search code and fetch files and search issues. And I do actually have a middleware that limits the number of tool calls, because sometimes it just does too many. I have a max of eight tool calls right now, and after eight I say: okay, you're done, you've done too many tool calls, I'm cutting you off, you need to start summarizing. Because if you give an agent access to lots of tools, it can just go hard. It can go forever, just keep calling more tools and increasing that context size, and at a certain point it's time to finish up and make a decision.
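By the way, you don't need anything framework-specific to enforce that kind of budget. Here's a sketch of one way to do it by wrapping the tools; this illustrates the idea, it's not my actual middleware code:

```python
from langchain_core.tools import BaseTool, StructuredTool

def with_budget(tools: list[BaseTool], max_calls: int = 8) -> list[BaseTool]:
    """Wrap tools so that after `max_calls` total invocations, every further
    call returns a cutoff message instead of hitting the API again."""
    state = {"count": 0}

    def wrap(t: BaseTool) -> BaseTool:
        def limited(**kwargs):
            if state["count"] >= max_calls:
                return ("Tool budget exhausted. Stop calling tools, "
                        "summarize what you have found, and make a decision.")
            state["count"] += 1
            return t.invoke(kwargs)

        return StructuredTool.from_function(
            func=limited,
            name=t.name,
            description=t.description,
            args_schema=t.args_schema,
        )

    return [wrap(t) for t in tools]

# tools = with_budget(tools, max_calls=8)  # then hand these to the agent
```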
So, let's see. Okay, it finally did all those tool calls and then proposed an action.
Once it proposes an action, that goes to my Agent Inbox. You can see I've actually run this twice, and it found the same issue both times, so it sent requests to my Agent Inbox saying: hey, this is an action from an agent that needs your approval. So I can look at this: here's the issue, and the agent wants to close the issue. This is the rationale for why it wants to close it. It wants to add a label, it wants to remove a label, it doesn't want to assign Copilot, and it wants to post a comment, and here's the comment. What I really like about Agent Inbox is that I can edit all of this right here. If I don't like the comment, I can go in here and edit it. Let me quickly do that: where it says there are established fixes, let me just add 3.13 in there, and get rid of that other bit. Okay. So I can look at all this, see if I agree with everything, and edit it. And that looks good. So what I'm going to do now is hit submit.
That goes back to the agent, and the agent applies the decision and finalizes it. So now we should actually be able to see that decision on the issue itself. Let me open up the issue: here you can see the comment, and you can see it's closed. So this is the way that I have a human in the loop. I can kick off this agent wherever; I could kick off lots of these agents. Once they get to the point where a decision has been made, those get sent to the Agent Inbox. I can look at the Agent Inbox and say: hey, what still needs action? I can see what it's proposing, I can edit it and submit it, and then it goes through to the issue. So I really like this kind of user interface for having a human in the loop, because you can imagine eventually you could have multiple agents doing things, where some of them might need your approval, and you could just look at your Agent Inbox and ask: okay, what actions need approval today?
All right. So that is the approach with the Agent Inbox, and it uses some LangChain code to do this. Once again, let me share the repo for that one in the chat so you can check it out. Let me also share the one from earlier: the simpler one is in this agent-frameworks-demos repo, and the chat mode was in the Azure Search OpenAI demo. So everything I've shown is open source, and you can check it out yourself.
All right. So that was problem number one: issues. I'm just going to pause to see if there are any questions on that before I move on to problem number two. All right, looking good. We can also tackle questions at the end if you think of any. So now let's talk about the next problem: too many repos.
So this is a list of all the repos inside Azure-Samples where I am the primary maintainer. That is quite a few repos, and the reason for all those repos is that there are so many different ways you can build things in Python. Python developers use different frameworks: for web apps you could use Django, Flask, FastAPI, or Quart, and for AI agents you could use LangChain, you could use Pydantic AI, you could use Semantic Kernel. There are so many different ways you can build, and I'm trying to show people that it's possible to build all of those things, and that leads to lots of combinations and permutations, and then I end up making repos for all these different combinations and permutations. So that means a lot of repositories, and that's only the repositories under Azure-Samples; I also have 365 repositories under my personal GitHub account. So quite a few repos, and these repos need constant maintenance. The Python packages and the npm packages always need upgrading: there are security vulnerabilities, so we've got to upgrade the packages. I do have Dependabot enabled in all the repos, and Dependabot in theory tries to upgrade your packages, but a lot of times it's not smart enough to figure out exactly how to do the right upgrade step: it doesn't do pip-compile correctly, it doesn't understand uv, that sort of thing. So dependency updates are a huge pain point for me. And then sometimes you actually have to change the code when a dependency changes too: it requires code changes, test changes, all of that. APIs change, frameworks change, and we have new tools: we're generally moving from pip to uv, from black to ruff, and we might want to make that change across all the repos. We have new best practices all the time. So there are lots of ways in which these repos need to be maintained, lots of updates that need to be made to them.
And the interesting thing is that many of the repos need really similar upgrades. For example, I want to move them all to using uv, and that's a very similar upgrade across all of them, but I can't just go and edit the exact same file across every repo; instead, there are slight differences in every repo. So I need something that has the flexibility to make these similar upgrades, and it can't just be a regular expression. I originally thought I was going to use regular expressions to solve this, and I wasn't really looking forward to that. But the thing is, we need more flexibility than that, because there's a lot of similarity in these repos and a lot of similarity in the maintenance that needs to be done, but they're not all exactly the same.
So how do I maintain these repos as a human? Well, if I have a change that I think is a good change, I try out that change in one repository. Generally, I try the change in the repository that has the highest test coverage, because some of my repos have very good test coverage (like 100% coverage and different kinds of tests) and others of my repos do not. So I generally test out a change in the one that has the highest test coverage, and if it's good, then I'll manually make that change across every single repository... and then I get bored and give up and decide there has to be a better way. So that's where we're going to bring in agents.
So now GitHub has an agent that you can assign to issues in your repo. It's basically a background agent, an asynchronous agent, that will work on an issue inside a containerized environment on your repo to try and solve the issue. That can be a really helpful way to make these menial fixes, so that you don't have to check out the repo yourself. You say: hey, Copilot, I need you to make this upgrade, here are the steps to do it, go make this change and verify it. And then Copilot will send a PR, and you can verify the PR.
So here's one I did last week. Let me open that up and show you. Here I said: okay, you need to change this YAML file; find it, update a version number, add this constraint, and remove a section. Then I assigned it to Copilot. So Copilot put its eyes on it, and then Copilot made a PR, and it does that within 20 minutes or so, depending on how long it takes to work on the issue. Then we can go and look at the PR, and I review it: okay, yeah, that looks good, it bumped the version, it bumped that, and it removed that section. It's a pretty simple change, but it's still more complex than just a regular expression, because it did have to bump some numbers here, so it's helpful to have an actual LLM on this versus my attempt at regexing. And you can see it writes the PR up, and you can go back and forth on it. I did actually tell it to go back and forth on something, mostly just to make sure it could, so I can show you that it can reply. But if you find that it's gone completely far afield, where its PR is really not what you wanted and you'd have to do a lot of back and forth, I would actually suggest that you close the PR, close the original issue, and make a brand new issue. Generally, once you realize that you didn't give enough instructions in your original issue, it's easier to create a new issue that has enough instructions and assign that to Copilot than to keep going back and forth with Copilot on the original. Especially if you're going to apply the same issue to other repos, because you want to figure out which issue description is really going to work best, so that ideally you don't have to iterate: the best case is if it gets it right immediately, on its first shot. So I'll review it, I'll iterate, and then I will merge it.
When you do get the PRs, one tricky thing is that a human has to approve the workflows to run. I find that fairly annoying, and I found an issue about it, because I want it to automatically run the workflows so that it can see if they failed and address the failures. So a human has to approve the workflows to run, if you have any CI, and then a human also has to mark the PR as ready for review. Hopefully that'll change in the future so that there are fewer buttons we have to click. As I was saying, if the PR is poor, I think it's best to write a new issue. Another thing you might want to do is improve the agents.md in your repo. That's a file that you put at the root of your repo that tells agents how to work with your repo, and generally you want to have one if you're going to be using the GitHub Copilot coding agent; it's going to be much more successful. So let me show you the agents.md for this repo. It has instructions for coding agents: here's the overall code layout, here's how you add new things, how you add new settings, here's how you upgrade dependencies, here's how you check Python stuff. It's just the most important stuff that an agent needs to know, and generally it will make background agents, and also VS Code agents, much more successful at working on a repo.
All right, so that's how you can get Copilot to address your issues. But remember, I have lots and lots of repos. So how do I make it so that I can file the same issue across many repos? Potentially I've got 500 repos that could need a fix, 500 repos I'm actively maintaining. How do I file the issues only in the repos that need that fix? So, of course, I wrote a program, and the program knows how to go through all my repos, look to see which repos need a particular fix, and then file an issue in each one that does. That way I basically automate the creation of issues across all the repos that need it. I use a YAML file to say: you're looking for any repo that has an azure.yaml and has this pattern in the file, and if so, you're going to make an issue with this title, and here's the issue description. Then I run that check across all of my repos, and it makes issues in each of the repos that met the criteria, and it auto-assigns them to Copilot, and then Copilot works on them.
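The bones of that program look something like this. This is a simplified sketch using PyGithub, with one check hardcoded instead of driven by YAML; the issue title, body, and criteria are placeholders:

```python
import os

from github import Github
from github.GithubException import UnknownObjectException

gh = Github(os.environ["GITHUB_TOKEN"])
DRY_RUN = True  # first pass: only print what would happen

ISSUE_TITLE = "Add an agents.md for coding agents"  # placeholder
ISSUE_BODY = "Add an agents.md at the repo root covering code layout, checks, ..."

for repo in gh.get_user().get_repos():
    if repo.archived or repo.fork:
        continue
    try:
        repo.get_contents("agents.md")  # does the file already exist?
        continue  # yes: this repo doesn't need the fix
    except UnknownObjectException:
        pass  # missing: this repo needs an issue filed
    if DRY_RUN:
        print(f"Would create issue in {repo.full_name}")
    else:
        issue = repo.create_issue(title=ISSUE_TITLE, body=ISSUE_BODY)
        # Assigning the Copilot agent takes an extra API call (shown later).
        print(f"Created {issue.html_url}")
```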
Okay, so let me actually demo that so you can see what it looks like. I was working today on a new issue that I want to assign to a bunch of repos. Here's the issue. I was just talking about the importance of agents.md, and most of my repos don't have an agents.md yet. So what I want to do is assign Copilot to write an agents.md for all the repos that don't have one. I've got this test issue: this is the one I manually wrote and manually assigned, and it's got a description of what I want in it, based off of what has worked well for me in other repos, and I assigned it to Copilot. So then I can go and see: okay, here's what it came up with. And I gave it some feedback here, and it looked at my feedback... actually, did it look at my feedback?
All right, I'm going to tell it again to address my feedback. Okay, here we go. And here you can see that I have to approve the workflows to run and mark it as ready for review. So now it's running, and Copilot is going to address my feedback. So this is one issue. Now I want to automate this across all my repos. What I'm going to do is go to my next repo, the one that knows how to assign issues: this is my repo maintainer agent repo.
And I'm going to say: okay, make a new code check (let me make this bigger) that searches for repos that do not have an agents.md at the root. If the file is missing, then make a new issue based off this issue. And then I'm going to point it at my test issue. So I'm telling it to make a new automated code check based off my test issue, and that way it's going to write the YAML for me. I generally don't like writing YAML myself, so if I can have Copilot write the YAML for me, that's great. It's going to go ahead and figure out how to write this YAML. It's going to look at that test issue, the one that I like, the one that seems to work fairly well, and now it's going to create the new YAML. So we'll see a new YAML file pop up here soon.
Okay, all right. So here's the new YAML that it wrote. It's checking to see if agents.md is missing; if so, it's going to add this issue, and here's the issue description, and it assigns it to the Copilot agent. Okay, that looks good. It's even running the tests to make sure they pass. All right, tests look good, and it's summarizing. Looks good. Now I'm going to tell it to run a dry run. What I do is run the check in dry-run mode first, just to see what issues it would make.
Oh, I see a question: is it possible to assign things to a different AI, like assigning to Claude Code? That's a good question. I haven't tried it. If you can assign it to Claude Code from the GitHub user interface, then you should be able to assign it in the API. I'll show you the code that actually does the assignment. You just need to figure out what the assignee ID is; if there is an assignee ID for it, then you would just use that assignee ID instead. I've never tried assigning to another bot, but if it's possible in the GitHub UI, then it should be possible in the API, because I'm just doing an API call that says: "Hey, assign the issue to this user." You could also assign issues to non-robots as well. In the future we'll have a mix of robots and humans. Maybe mostly robots. I shouldn't call them robots; they're really agents.
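For reference, GitHub documents assigning the Copilot coding agent through the GraphQL API: you look the agent up among the repo's suggested assignable actors, then call replaceActorsForAssignable with its ID. Here's a sketch of that flow; the repo name and issue node ID are placeholders, and you should check the current docs since this API has been evolving:

```python
import os
import requests

API = "https://api.github.com/graphql"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

def gql(query: str, variables: dict) -> dict:
    resp = requests.post(API, json={"query": query, "variables": variables},
                         headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["data"]

# 1. Find the Copilot coding agent among the repo's assignable actors.
data = gql(
    """query($owner: String!, $name: String!) {
         repository(owner: $owner, name: $name) {
           suggestedActors(capabilities: [CAN_BE_ASSIGNED], first: 100) {
             nodes { login ... on Bot { id } }
           }
         }
       }""",
    {"owner": "my-org", "name": "my-repo"},  # placeholder repo
)
bot_id = next(n["id"] for n in data["repository"]["suggestedActors"]["nodes"]
              if n["login"] == "copilot-swe-agent")

# 2. Assign an issue (its GraphQL node ID fetched elsewhere) to the agent.
issue_node_id = "I_..."  # placeholder
gql(
    """mutation($assignableId: ID!, $actorIds: [ID!]!) {
         replaceActorsForAssignable(
             input: {assignableId: $assignableId, actorIds: $actorIds}) {
           assignable { ... on Issue { number } }
         }
       }""",
    {"assignableId": issue_node_id, "actorIds": [bot_id]},
)
```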
Okay, so this is running, and we can see that in the dry run it said it would create quite a few issues, and that makes sense, because most of my repos do not have an agents.md yet. Now I can run it in non-dry-run mode, so we can actually watch it make the issues. How many is it going to make? It's going to make 33 issues. Here's the thing: once you start automating this issue assignment to the Copilot agent, you're going to end up doing more code reviews. You can assign your code reviews to the coding agent as well and have it check its own work, but you should always be reviewing code yourself too. So there is an increased burden of code reviews; I'm spending more time on code reviews now. That's why I like to have a good test issue, to make sure that I like the result it produces, before I go forward with creating the issue across many, many repos. So you can see it's making lots of issues. I'll just open up a random one, and you can see it made the issue and assigned it to Copilot. It's going to make 33 issues like that across my repos, and pretty soon my inbox is going to be filled with requests from Copilot to review all these new agents.md files.
All right. So that is that approach, and the point is that we've got these background agents, and you can think about ways to offload some of the more trivial work, the more verifiable work. The best case is when you've got really thorough CI on your repos: unit tests, integration tests, all that stuff. The better your CI, the more you can delegate to coding agents. If you do not have tests on your repo, it is very hard to delegate to coding agents, because you have no idea if they've done it right. But if you've got really thorough CI checking everything (the linting, the formatting, the actual functionality), then you can delegate more and more, because you have the confidence that things are working. So my hope is that people are motivated to spend more time on adding verification, so that they can delegate more to agents.
All right, here are some parting thoughts. I hope that you've now got some ideas about how you can start to maintain code using AI agents. There's a range of ways you can do it, and I showed some of them, including some actual Python programs, but really you can do so much using GitHub Copilot and MCP servers, and that can get you most of the way. In some ways it's better, because you don't have to maintain the code; you're just maintaining a prompt, and you've got that chat interface where you can iterate and go back and forth. So I would say start off with the more low-code approach, and only start writing programs if you've got something you're going to use with a large degree of repetition, or if you just feel like it. Part of why I write Python programs is because it's fun, and I want to see what it's like if I actually turn something into a program. I like programming, even if the agents do a lot of it for me now. But I do think that you can use GitHub Copilot and more low-code tools with great success, and you don't necessarily have to jump to writing full Python programs. And the great thing is that even if your approach doesn't work, you will learn more along the way about what works well, what doesn't, and what approaches you might be able to use in the future.
And of course, if you find something that does work, do share it with the world. Share your prompts, share your modes. If you do have a cool mode, we have the Awesome Copilot repo, where people share lots of prompts and modes. That's a nice place to share modes as well. So yeah, do share things out so that other people can learn.
And this is a plug: we do have a series coming up if you want to learn more about using Python with generative AI. It's a whole nine-part series starting next week in October, and we're going to be covering a ton of topics. I'll be doing it in English, and my colleague Gwen will be doing it in Spanish. So if any of these interest you, please do sign up; it should be a really fun series.
All right, so now I can go ahead and tackle the questions. I did see some more come in. Paulo asks: can you define what model the coding agent uses?
Yeah, that's a good question. I don't think we have that flexibility yet. What I do know is that we just dropped the Copilot CLI, which I think uses maybe the same agent or logic as the coding agent, and that one defaults to Claude, but there is an environment variable that you could override to set it to GPT-5. I don't think you can set it to any arbitrary model, but I believe at least GPT-5 is supported, and maybe some other ones as well. So the question is, can we do the same with the background agent, the coding agent? The hard thing is actually that everything is kind of called the same.
So here we go, about the coding agent: this is where we would see auto model selection. Let's see, the model is determined... Okay, so it says you cannot change the model used by the Copilot coding agent; it is Claude Sonnet 4. So I guess they want a high degree of control over it right now. I imagine they'll have more flexibility in the future, but currently you cannot change it, and it is Claude Sonnet 4. Good question.
All right, let's see. Okay, more questions.
Angel is asking about headless mode. Maybe you're referring to Playwright when you say headless? In this case I am doing a headful call. Let me open this up... okay, headless. All right, so I did headless=False. This was an automation with Playwright, and with Playwright you don't have to watch the browser; you can set headless to either true or false. So in theory I could run this in any sort of environment. You don't have to actually watch the browser pop up; I just think it's fun. But you can run it entirely headless as well. That's my guess as to what you were talking about when you said headless mode, but if you meant something else, let me know.
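To show what that flag looks like, here's a minimal Playwright sketch in Python. The URL and the page interaction are placeholders, not the actual automation from the demo; only the headless flag is the point.

```python
# Sketch: the same Playwright automation can run headful or headless.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # headless=False pops up a visible browser window, fun to watch;
    # flip it to True to run the same script on a server with no display.
    browser = p.chromium.launch(headless=False)
    page = browser.new_page()
    page.goto("https://example.com")  # placeholder URL
    print(page.title())
    browser.close()
```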
Mario asks how to avoid overengineering. Yeah, that's a good question; I definitely suffer from overengineering. That's why, after showing you all those things, I'd say that for issue triaging, the chat mode is probably a pretty good middle solution: I've got control of the tools, but I don't have a full Python program. The nice thing about that is it's easy for other people in the repo to modify and use; other maintainers don't have to worry about getting the Python program running. Because if I were using that Python program with other maintainers, then really I should deploy it to Azure, deploy the agent inbox, add on a layer of auth, maybe add on a virtual network, and that's getting really fancy. So the interesting thing is there's quite a lot you can do with custom chat modes, and there's a lot you can do also with just running these agents locally. If you're the only one that needs the automation, you don't have to go through all the effort of deploying things to the cloud; you can just run these things locally. I've resisted actually deploying anything to the cloud, which has made it much easier to maintain them. So I would say try the simpler things first and see if you're really running up against a wall. Also, now that there's a GitHub Copilot CLI, I think it's hopefully going to support chat modes at some point, and then maybe we'd be able to just queue off a bunch of requests too. We'll see. So it's good to start simple.
Any comment about the new agent CLIs, Claude Code and Codex? I actually haven't used Claude Code; I'm not really a big CLI user. I kind of like the UI of VS Code, because I can just look through all my code really easily, so I haven't really embraced the CLIs. I did try out the new GitHub Copilot CLI and it did work, but I don't see it as being my primary way of working when I'm in a repo. Maybe for a small repo, but as soon as a repo has lots and lots of files, I really like having the file explorer that I can just look through. But I know many people like the CLI approach, and that's great: you can use the Copilot CLI, you can use other CLIs like Claude Code, whatever works for you. The important thing is to add something like the agents.md file. It is supported by many, many agents. It is not yet supported by Claude Code; for Claude Code you have to use claude.md, so you could do a symlink or a copy there. But agents.md is supported by quite a few agents, so I would really recommend adding an agents.md so that all of the agents that support it can be more successful.
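If you go the symlink route, it's a one-liner from the repo root. Here's a minimal Python sketch (a plain shell symlink or a copy works just as well); the lowercase filenames follow the convention used above, so adjust the casing to whatever your agents expect.

```python
# Sketch: point claude.md at the existing agents.md so Claude Code
# picks up the same instructions. Run from the repo root.
from pathlib import Path

target = Path("claude.md")
if not target.exists():
    # A symlink keeps the two files in sync; use shutil.copy() instead
    # if your platform or tooling doesn't handle symlinks well.
    target.symlink_to("agents.md")
```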
All right, let's see if there are any other questions. Okay, I think we've covered a lot of the questions. So please do come to our Python AI series. I also have weekly office hours in the AI Discord; let's go ahead and share the link for that so people know about it. Where is... oh, here we go, we'll share that link as well. So there are lots of opportunities to ask more questions, and we'll also share the link for the slides, so if any of you want to reference the slides, you can do that as well. I see a couple of last questions.
Any information on an official Clippy return? I know everybody wants Clippy to come back. No, I cannot reveal... I don't really know; I think that's above my pay grade. I assume Satya is in charge of that, and he's the only one who can decide if Clippy will return. But I know there's a big fan base for Clippy. I remember it from when I was a kid.
All right, thank you everyone. Thanks so much for all the great engagement, all the comments and questions in the chat. I hope this was helpful for you, and I hope to see you at future live streams and the office hours. Bye everyone.
Thank you all for joining and thanks again to our speakers.
We're always looking to improve our sessions and your experience. If you
have any feedback for us, we would love to hear what you have to say. You can
find our survey link on the screen or in the chat and we'll see you at the next one.