
Automate Your Meeting Summaries & Actions with AI Agent + n8n (Microsoft Teams)

By nocodecreative

Summary

Key takeaways

  • **Automate Teams Transcripts to Actionable Summaries**: This workflow automatically processes Microsoft Teams meeting transcripts to generate participant-aware summaries, strategic takeaways, decisions, and action items, which are then emailed to users with a link to an interactive web app. [00:04], [00:26]
  • **Multi-User n8n Workflows with Token Management**: To enable multi-user workflows in n8n, a custom authentication application is used to manage user tokens securely, allowing automations to run on behalf of multiple users without storing individual credentials. [01:15], [01:28]
  • **Handling Transcripts from Third-Party Schedulers**: When transcripts are not available via the standard endpoint (e.g., for meetings scheduled via third-party tools), the workflow searches user SharePoint folders using the meeting subject and date to locate and download the transcription. [18:10], [18:55]
  • **Leveraging GPT-4.1 for Structured Insights**: The workflow utilizes GPT-4.1 with a low sampling temperature (0.3) and a JSON response format to reliably extract structured data, including summaries, takeaways, and action items, from meeting transcripts. [22:44], [23:33]
  • **Production-Ready Automation with Postgres**: All meeting data, including raw transcripts, JSON insights, and HTML output, is persisted in a PostgreSQL database, enabling retry logic for transcript processing and serving as a data source for a web application. [12:53], [13:19]
  • **Multi-Tenant Support for Enterprise Use**: The automation supports multi-tenant environments by utilizing client credentials for application permissions, allowing it to fetch call records from different tenants by making separate API calls for each. [35:40], [36:05]

Topics Covered

  • Automate your entire meeting follow-up process.
  • How do you scale single-user automations for an enterprise?
  • Use a database to build stateful, resilient automations.
  • Find hidden transcripts with this undocumented Microsoft API trick.
  • Why prompting for JSON is better than structured output.

Full Transcript

Today we're going to walk through my

latest n8n automation template. The

workflow automatically grabs Microsoft

Teams transcriptions and analyzes them.

Once the analysis is complete, it emails

you a summary and the email contains a

link where you can open up a web app.

It's actually a fully interactive web

app where you can draft follow-up emails

using AI based on the meeting context

and save them directly to your draft

folder within Outlook, ready to send to

participants of the meeting. The

analysis is detailed and participant

aware. It includes a full summary,

strategic takeaways, decisions made

during the meeting, and clear actions

grouped by owner. You can also push out

to third party tools. So, in this case,

I'm using Mem.ai, which is an AI-powered

note-taking tool, and it allows you to

chat with your meeting context and any

other notes that you have in there. So,

you're never going to forget what was

discussed. You always have a reference

point that you can quickly refer back to

in the future or search based on memory.

So, really good tool. Recommend you

check it out. So after we walk through

the templates, we're going to tackle a

common challenge within n8n automation,

and that is the fact that workflows only

run per user. As you'll know, you need

to sign in to n8n and create credentials

within the platform, which means that

each workflow is tied to the credentials

within the workflow. We're going to flip

that on its head and I'm going to show

you how to transform a single user

workflow into a multi-user workflow,

something that can then be used across

your entire organization. So to do that

we are going to deploy a solution that

we've developed internally here at nocodecreative. It lets users securely authenticate using Microsoft OAuth2, and it

takes care of token management behind

the scenes and this allows your

automations to run on behalf of multiple

users without the hassle and stress of

having to manually worry about

individual credentials or multiple

workflows within n8n or whatever

automation platform you're using. It's

going to be a deep dive, but stick with

me. You'll learn how to build

enterprise-grade automations for real

world business needs. Both versions of

the template are going to be available.

So, the single-user version and the

multi-user version that we are going to

create today. Head over to the n8n

template marketplace for the single-user

version. The multi-user version is

available exclusively via my blog. Check

it out in the description below. And the

authentication platform is also

available as well. Be sure to check out

the blog for a full description.

There'll be plenty of information in

there in terms of how to set this up. We

will be using an external database, Postgres in this case, which does

require some self-hosting and setup. So

the blog will contain all the

information that you need on how to get

that running. Now a pro tip would be to

use Supabase, as it exposes a connection string that allows you to connect to the underlying Postgres database, and in n8n you can use a Postgres node to connect directly to Supabase. So if you're not comfortable deploying your own Postgres database, that is an

alternative solution but without further

ado let's dive in. So the first thing

that you're going to need for this

template to work is a Microsoft Azure app registration. So go to portal.azure.com and sign up for Microsoft Azure. It is completely free; there's no cost to creating an app registration. Once you've logged into Microsoft Azure, you should see an app

registrations button. If you don't, you

can search for it here. Simply come

to new registration. Give it a nice

recognizable name. Choose whether you

want it to be applicable to a single

tenant, so one organization or

multi-tenant. Um, and click on register.

You want to take note of your client ID

and you want to take note of your

directory, your tenant ID as well. The

next thing you're going to want to do is

come down to authentication. n8n will

provide you with a redirect URL. So,

this is where you enter your redirect

URL.

You need to make sure that access tokens

and ID tokens are checked and it will

give you another opportunity here to select single or multi-tenant if

necessary. Then you need to come down to

API permissions and you're going to want

to add all of these permissions. Now I

will leave a list of these permissions on the blog post. To add

each permission all you need to do is

click on add permissions and then you

can choose Microsoft Graph and you can

either choose delegated or application

permissions. So, for example, delegated permissions: the first one is Calendars. You can see Calendars.Read; all you have to do is tick it and then click on add permissions. Same for application permissions: all you do is switch to application permissions, for example call records, and there you go, just tick it and press add. Similar

for SharePoint: you would just look for SharePoint in this list, delegated

and application permissions and you

would go through the exact same process.

Once you've added your permissions, head over to certificates and secrets, and you want to create a new client

secret. Give it a name, give it an

expiration, and once you've done that,

click on add. It will give you the

value. Enter it into n8n. You'll only get

to view this once and then it will

disappear. So make sure you take note of

it. And that's how you set up an app

registration. So once you've set up the

app registration, you are good to go.

Okay. So we'll start with an overview of

the existing template.

As you can see, there are three triggers

on this template. There's a form trigger

here, which isn't really needed, but if

you did want to manually download a VTT

file after a recording, this form would

allow you to manually process that file. For whatever reason, you may have

historic meetings that you want to go

and get the file for and put it through

this process. And the form trigger will

allow you to do it manually,

effectively. So it looks for VTT file

which after a recording you can download

directly in Teams or from Microsoft Stream, and it just asks you for some inputs, and then we have the code

nodes just to format those outputs so

that it's correct for the rest of the

workflow. But the primary focus of the

workflow is to run on schedule. So we

are using the schedule trigger which

runs every 5 minutes. And of course we

have a manual trigger here as well for

testing. Next we have a merge node and

that's just to unify the inputs. And

then we have a get profile node. So this

is just an HTTP request. We are doing a

call to the Microsoft graph endpoint to

get the profile of the current

user. Okay. So the way that we're doing

this is: we're basically authenticated using OAuth2 API credentials built into n8n, and I have connected that

to my personal account. So it only works

in the context of me right now. The goal

later on is we're going to adapt this

workflow to work with multiple users. So

one workflow multiple users and we're

going to use an application which I've

just finished developing which is

available which will allow workflows to

work across multiple users. So we'll go

into that in a bit more detail when we

update the workflow. But for right now,

we have attached this to a single

credential and this credential is

connected to the Microsoft Azure app

registration that we created

earlier. So once we've got this set up,

it basically allows us to get the user

that's tied to these credentials. And

what we're looking for here is the ID of

the user. So we're going to use that

later on within the workflow. This is

our starting point. Then we have an

if

node which

is checking to see if the data has been

submitted by the form or not. So if it

has been submitted by the form, it's

essentially going to bypass the rest of

the workflow and go straight into our

meeting analysis over here, which we'll

come back to in a bit. If it hasn't been

submitted by the form, we are going to

set a time range here. Essentially what

we're going to do is we're going to get

all records but we're going to restrict

the time frame on which we're returning

those records

for. So within this set time range node

we have an

expression. The expression essentially gives us a date in the past, and that is derived from the minutes-ago variable, so you can change this to suit your needs. 420 minutes is about 7 hours, so we're almost always looking over a working day. But if you wanted to

go further back in time, you can extend

that as far as you like if you want to

process all records. Okay. Once we've

set the time, we're going to call

another endpoint within the Microsoft

Graph API. And as you can see, there's a

filter expression here which is making

sure that we only get records that are

greater than or equal to the time that

we created in the previous node.
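As a rough sketch of what those two nodes amount to in code (the helper names here are mine, not the template's; only the URL shape comes from Microsoft Graph, so double-check the filter syntax against your node):

```javascript
// Sketch of the "set time range" step plus the filtered call-records request.
// Helper names are illustrative; the Graph URL shape is from Microsoft's API.

// Equivalent of the n8n expression: an ISO timestamp N minutes in the past.
function minutesAgo(minutes, now = new Date()) {
  return new Date(now.getTime() - minutes * 60 * 1000).toISOString();
}

// Build a callRecords URL so only records starting on/after `since` come back.
function buildCallRecordsUrl(since) {
  const filter = `startDateTime ge ${since}`;
  return `https://graph.microsoft.com/v1.0/communications/callRecords?$filter=${encodeURIComponent(filter)}`;
}

const since = minutesAgo(420); // 420 minutes, roughly a 7-hour lookback
console.log(buildCallRecordsUrl(since));
// In the workflow this URL is called by an HTTP Request node with an
// application-permission bearer token in the Authorization header.
```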

Now, a word on this one: the get call records endpoint is only available with application permissions. When you create your Azure app registration, there are two different types of permission. We have delegated permissions, which are based on the user that's signing in, and application permissions, which are granted to the application itself, so it can look at everyone's records within your Microsoft tenant. The call records endpoint is only available for application permissions. So therefore,

the credentials that we set up are

slightly different. They're still OAuth2 credentials, but we're using application permissions within the Microsoft Azure app registration. If you want to get call records across multiple tenants, you will have to have a separate HTTP request for each tenant, which I'm

going to do. So, we're going to update

this to work across multiple tenants

when we adapt the workflow. So, I'll

come back to that in a bit. But the

template by default is set up to work

with one tenant. Most people are

probably only working with one tenant.
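For the multi-tenant case, each tenant needs its own app-only token via the OAuth 2.0 client-credentials flow, then its own call-records request. A hedged sketch (the tenant list and field names are placeholders, not from the template):

```javascript
// Sketch: one client-credentials token request per tenant, then one Graph
// call per tenant. Credential values here are placeholders.
function buildTokenRequest(tenantId, clientId, clientSecret) {
  return {
    url: `https://login.microsoftonline.com/${tenantId}/oauth2/v2.0/token`,
    body: new URLSearchParams({
      client_id: clientId,
      client_secret: clientSecret,
      scope: 'https://graph.microsoft.com/.default',
      grant_type: 'client_credentials',
    }).toString(),
  };
}

// Graph cannot span tenants in one call, so loop and collect per tenant.
async function fetchCallRecordsForTenants(tenants) {
  const results = [];
  for (const t of tenants) {
    const req = buildTokenRequest(t.tenantId, t.clientId, t.clientSecret);
    const tokenRes = await fetch(req.url, { method: 'POST', body: req.body });
    const { access_token } = await tokenRes.json();
    const res = await fetch('https://graph.microsoft.com/v1.0/communications/callRecords', {
      headers: { Authorization: `Bearer ${access_token}` },
    });
    results.push({ tenantId: t.tenantId, records: (await res.json()).value });
  }
  return results;
}
```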

Once we've got all of our previous call records, we are going to have a look and

make sure that we only get the relevant

ones. So we have a code node here and we

have a small description here of what

this code node does. So as you can see,

it processes a list of Microsoft Teams

meeting call records and filters and sorts past meetings. So it extracts all of the call records from the input data.

It filters out things that have not

ended yet. Anything that has ended less

than 5 minutes ago, it filters out

because sometimes there's a lag between

the transcription becoming available

after the meeting ends. So this just

gives us an opportunity for the

system to catch up before we try to get

the transcription. We're also filtering

out meetings that were not organized by

the user. And that is because we cannot

get transcriptions that are not tied to

the authenticated user. So we're

filtering anything out that doesn't

match the ID from the get profile node.

We're taking the ID there and we're doing some filtering in the code.
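That code node's filtering can be approximated like this (field names follow the Graph callRecord shape; treat it as a sketch, not the template's exact code):

```javascript
// Simplified take on the code node: keep only call records that (a) have
// ended, (b) ended more than 5 minutes ago (transcripts can lag the meeting),
// and (c) were organized by the authenticated user, then sort by end time.
function filterCallRecords(records, organizerId, now = new Date()) {
  const fiveMinAgo = new Date(now.getTime() - 5 * 60 * 1000);
  return records
    .filter(r => r.endDateTime && new Date(r.endDateTime) <= fiveMinAgo)
    .filter(r => r.organizer?.user?.id === organizerId)
    .sort((a, b) => new Date(a.endDateTime) - new Date(b.endDateTime));
}
```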

Yep. So then we're going to sort the

meetings by end time. It's going to

return a nice clean output for us to

work with. So in this case, it's found

two meetings, as you can see, and it outputs all the details that we need to

process them in order to identify if the

meeting could potentially have a

transcript. I should say that just

because there's a call record doesn't

necessarily mean it has a transcript. So

we need to do some filtering to make

sure that what we're going to process

does have a transcript. And to do that,

the first thing that I identified is

that it needs to have an online meeting

URL. So if it's an online meeting, it

will have a join

URL. So we're going to make sure that we

have a join meeting URL. It's going to

filter out anything that doesn't. Using

that join URL, we're then going to use

the online meetings endpoint to grab

more details about that meeting. So as

you see, we're kind of filtering

meetings that match that join URL and

then we're returning the information

available from the Microsoft Graph

endpoint about that meeting. Now, of

course, there could be multiple meetings

at this point. So we're going to split

them out. Then we're going to just do a

check for duplicates. So this is a

really good node within n8n if you

haven't used it. Well worth using. It's

very handy. And this is just really a

safety. We don't want to process

multiple records with the same meeting

ID. So, we're going to remove any

duplicates before continuing. You can

remove items that have been previously

processed in previous executions, but

that is not what we want in this workflow.
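In plain JavaScript, the behavior being used here amounts to keeping the first item per meeting ID (the property name is illustrative):

```javascript
// What the Remove Duplicates node does here, expressed as code: keep the
// first item per meeting ID so the same meeting is never processed twice in
// one run. (The node can also dedupe across executions, which the template
// deliberately does not use.)
function dedupeByMeetingId(items) {
  const seen = new Set();
  return items.filter(item => {
    if (seen.has(item.meetingId)) return false;
    seen.add(item.meetingId);
    return true;
  });
}
```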

So, the next thing that we're going to

do is grab the ID of the

call. So, if we have a look here from

our filter node, there is a ID

field. So, I'm just going to grab that

ID and we're going to append it to the

final object. So we have a call ID here.

So now we have got all the details of the meeting and the call ID as well. The

call

ID is coming from the call records

node. The call ID wasn't included in the

get meeting details node. So essentially what we're doing is we're appending the call ID to the meeting

object. Then we have all the data that

we need in one place. The next thing

that we are doing here is we're saving

all of the data to a Postgres database. The reason we're saving to Postgres is twofold, really. Essentially

we have a web application that is going

to be served via email and people can

access any time. So we need to have all

the information saved somewhere where it

can be accessed at any time. So we have

a Postgres database where we're saving

everything. And this also allows us to

have some logic in regards to checking

if something has been processed or not.
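A hypothetical version of that bookkeeping: an upsert that inserts the row on first sight and bumps an attempt counter on conflict, plus the retry cutoff. The table, columns, and limit are illustrative, not the template's real schema (that's documented on the blog):

```javascript
// Illustrative upsert for the Postgres node: insert the meeting row, and on
// conflict bump the attempt counter instead of duplicating the row.
const upsertSql = `
  INSERT INTO meetings (call_id, subject, organizer_id, attempt_count, processed_at)
  VALUES ($1, $2, $3, 1, NOW())
  ON CONFLICT (call_id) DO UPDATE
    SET attempt_count = meetings.attempt_count + 1,
        processed_at  = NOW();
`;

// The schedule trigger fires every 5 minutes; stop retrying after 3 attempts
// (~15 minutes past call end), assuming no transcript will ever appear.
function shouldRetry(attemptCount, maxAttempts = 3) {
  return attemptCount < maxAttempts;
}
```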

Now you would have seen from the

schedule trigger earlier that this

workflow runs every 5 minutes and

sometimes there is a delay with

transcriptions becoming available. So

essentially we have some retry logic in

here. Runs every 5 minutes. If it

doesn't find a transcription in the

third run, then it's going to stop

processing it any further and it's going

to assume that there's no transcription

available. And this is because the call records only become available when a call finishes, and we're filtering to

make sure that we only get records of a

finished call. So if there's not a

transcription available after 15 minutes

after the call has finished, then we're

going to assume that there's no

transcription for that meeting and

therefore we're going to stop processing

it any further. Because if you remember,

we have a 7-hour window which we set

back over here with our time range. So

we don't want to keep trying to process

the same object for that 7 hours. We're

just going to try a few times and then

stop. And I will include in the template

some documentation with the SQL statement that will allow you to create the database. But if we have a look at the

node, you can see that we are using

execute query and we are inserting all

this information into the

database and we have on

conflict to make sure things are updated

and then we have an attempt count down

here which is going to increment by one

every time and we're also setting the

time of processing as well. So I'm not

going to go over the SQL statement right

now but there will be some more detailed

notes and statements so that you can actually create this in your own Postgres database. So because we are

using a SQL query to check if something's been processed, there will be occasions where it just doesn't return anything, because there's nothing left to process, and if

that's the case we want to filter it

out. So if there's no items to process

then stop. So the output from the

Postgres node will have a success

object and if that success object exists

we're going to stop at this point and

then we're using a merge node here. So

after we make sure that there's an item

to process what we actually want to do

is go and grab the item and then we want

to match that

item against what's coming out of the

Postgres node. So if the Postgres node

finds something that needs to be

processed, it can output it here and

it's going to output it here as well.

And obviously it's then going to keep

it. And then we're going to go into the

code nodes. And all we're doing here is

just grabbing the data that came out of the combine IDs node. Okay, which is

what I showed you earlier where we are

grabbing all of the data for the meeting

and appending the call ID at the end.

Okay, so that's just grabbing all of

that

data and the same

data is going to be coming out or a

slightly stripped back version of that

data is going to be returned from the

Postgres node. The important thing is that there's a call ID coming out of the Postgres node and coming out of this code node; because we're pulling from the combine IDs node, we also have the call

ID down here.

Okay, this is then going to allow us to

keep matches, right? So the Postgres node, if it's already processed the item,

it's not going to output it. Okay, so

this then is essentially some filtering,

right? If there's a match between what

comes out of the Postgres node and what's

available over here before we've

processed it, then it's going to keep

those matches by combining them,

matching fields based on the call ID.
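That keep-matches merge is equivalent to something like this (field names illustrative):

```javascript
// The Merge node in combine / keep-matches mode, approximated in code: only
// items whose callId also appears in the Postgres output survive, with the
// fields from both sides combined into one object.
function keepMatches(meetingItems, postgresItems) {
  const pending = new Map(postgresItems.map(p => [p.callId, p]));
  return meetingItems
    .filter(m => pending.has(m.callId))
    .map(m => ({ ...pending.get(m.callId), ...m }));
}
```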

And that then gives us the items that we

still need to process. So by this point

we have verified that it is an online

meeting. It meets our conditions for

processing. Keeping the matches. This

then allows us to check if there are

transcriptions attached to those

meetings. We are doing another call to

the online meetings endpoint within

Microsoft Graph. So we're using the meeting ID and we are hitting the transcripts endpoint, and we are doing this based on the user, because we are using OAuth2 API credentials for the user that authenticated within n8n. So again

we're going to change this later so that

we can it will work for multiple users.

We have changed the settings here to continue if it finds an error. And the reason that we're doing

that is because sometimes transcription

data may not be available via the

transcription endpoint. So for example,

if you use a third party tool to

schedule meetings, like Calendly or cal.com,

whatever it may be, the transcription is

not available via this endpoint because

it saves it in a different location.

Okay, so we've got two paths that come

out here. In this case, the meeting in question was scheduled by a third party

tool. So it hasn't found a transcription

and therefore it's given us an error. So

we can look at the error output of

the node. And if you come down to the error,

it couldn't find the online meeting

calendar ID for the meeting that we are

trying to find a transcription for. So

what we're going to do after some

investigation, some quite deep

investigation, I figured out that if we

transcribe a meeting that was created by

a third party tool, the transcription is still saved within the user's SharePoint

folder. So what we can do is we can

actually search to try and find the

transcription that relates to that call

that was created with the third party

tool. So that is what this HTTP request

is doing. Okay. So we're using again

Microsoft Graph. We are looking at the

users

folders actually within SharePoint. It's

where they're saved under the hood.

What we're actually doing is we are

searching with the subject of the

meeting, and when they are saved to this particular location, the subjects always have the date and time at the end. So your results may

vary a bit with this. If you have, for

example, multiple meetings with the

exact same subject on the same day, then

it's going to return multiple meetings.

So, you may want to extend this to

include a time parameter. But for me,

I'm very unlikely to have multiple

meetings with the exact same subject on

the same day. In this case, let's search

SharePoint, and it's found that there actually is a meeting that matches. And if

you look at the dates here, you can see

there's a date here at the end and

there's a time stamp here too. And it's

got the meeting transcription .mp4 at the end. So this is actually the record it's found. And this is actually

hidden. You have to dig, you know, if

you go and search through one drive or

you go and search through sharepoint,

you're not going to find this. It's only

available via the API and it's not

documented. So you have to do quite a

bit of digging to actually find this.
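A hedged sketch of that lookup: search the organizer's drive for the meeting subject via Graph's drive search endpoint, then narrow the hits by the date suffix in the file name. The user ID, subject, and date format below are assumptions; the exact transcript location is the part that took the digging:

```javascript
// Sketch of the fallback lookup for third-party-scheduled meetings. The
// drive search endpoint is documented Graph; where recordings land and how
// their names are suffixed is undocumented, so treat matching as heuristic.
function buildDriveSearchUrl(userId, subject) {
  return `https://graph.microsoft.com/v1.0/users/${userId}/drive/root/search(q='${encodeURIComponent(subject)}')`;
}

// Recording file names end with a date/time suffix; match on the meeting's
// date to narrow multiple hits (add a time check if subjects can collide on
// the same day).
function matchesMeetingDate(driveItemName, datePart) {
  return driveItemName.includes(datePart);
}

console.log(buildDriveSearchUrl('user-id', 'Weekly Sync'));
```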

But that's where it is. And here we can

see actually there is a transcription

available. So it's found a meeting that

has been recorded. Now because this can

return multiple results, the data that

we need is within an array: the value array. So we're going to split out the

items from that array. So we have our

individual meeting items. And then what

we're going to do is we're going to list

those transcriptions. After some

investigation, we found out exactly

where they were saved. And we can use

the ID of the recording and the drive ID of the recording to

then see if there's a transcription that

relates to that recording. And in this

case there is. So here we have found the

transcription that relates to that

recording. There's a download URL that

is output. Here we go. Download URL. And

we can see that we can go and grab that

download URL. And the result is we get

our transcription. If this is a meeting

that has been scheduled via Outlook or

scheduled via Teams, it doesn't need to

go through this process. So we skip the search, and then all we do is make

sure that there is a call ID

available. If there's a call ID

available, we can go and check straight

away and see if there is a transcription

available. Okay? And we're requesting

that the transcription gets returned in

text/vtt format. So once we've got our

transcription, what we're doing is: we have a merge node over here to unify

the data. So in all three

instances whether it's coming from our

form or whether it's coming from our

schedule trigger, where we have found the transcription because it was scheduled via Teams or Outlook, or whether it's been scheduled via a third party tool. So we've got three potential ways

to find a transcription. So what we need

to do is ensure that we unify all of the

inputs so that we can then process them

later on. We're just grabbing all the

data from the previous

nodes that we need and giving them all

unified key names that we can use later. And then we're using our merge node. And

then what we're going to do is we're

going to loop over each of the

transcriptions and then we're going to

process them with AI. So we have our

loop, and we have our meeting analysis agent here. As you can see,

it's got the name of the meeting

organizer, the email, and the subject.

And then it gives us our full

transcription down here, ready for

processing. We have a very comprehensive

system prompt in here. I'll let you read

over this in your own time, but we have

a directive of what it needs to extract.

We're giving it the output that we want.

There's an output structure that we

specifically want so that it can be used

later. So we're essentially getting two

outputs from this AI agent. We are

getting a very structured output of the

meeting analysis and we are also getting

a draft email output, a summarization of

the conversation. Now we're asking the

AI agent here to output its analysis in

JSON. So within the OpenAI chat model

down here, we're using GPT-4.1 because it's very

good at that. We've set the response

format to JSON and we have lowered the

sampling temperature down to 0.3 and

that is for consistency. What I'm

actually doing is I'm setting the JSON

using a set node here. Now, there are multiple ways to get JSON from an AI agent within n8n. You could use a tool and use a structured output parser.

However, I find it more reliable to

quite simply prompt the agent to output

the JSON that we need. and it's going to

output it as a string over here. As you

can see, we have our JSON object. And

then if you use a set

node, it's going to take that string and

we can set it to object. And then we

actually get our structured output at

the end. And I find this to be more

reliable than a structured output parser.
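The set-node step, expressed as code: take the agent's string output and parse it into an object. Stripping a markdown fence first is a defensive extra of mine, not something shown in the video:

```javascript
// Parse the agent's JSON string into an object, as the set node does.
// Models occasionally wrap JSON in a markdown fence, so strip that first.
// A null return signals malformed output for the caller to handle.
function parseAgentJson(raw) {
  const openFence = /^`{3}(json)?\s*/i;   // leading ``` or ```json
  const closeFence = /`{3}\s*$/;          // trailing ```
  const cleaned = raw.trim().replace(openFence, '').replace(closeFence, '');
  try {
    return JSON.parse(cleaned);
  } catch {
    return null;
  }
}
```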

Personally, I prefer it. So that's what

that's what I use. Here we can see we

have got all the information that we

need to generate the static web page

which the user then clicks on after they

receive the email to have a look at

their meeting analysis. Once we've set

our JSON, there's two things that are

going to take place. We're going to send

the user an email to say that the

transcription is now available. So we're

using an HTML node for this, which then

gives us a nice email to send out and

I'm using an Outlook node to actually

send that email. Now I need to change

this again. This is set up to work with

the authenticated user. We're using a

credential. So I'm going to change this

to work with multiple users and change

it to an HTTP request. And then we are

also generating a static web app down

here which I will show you separately

because it doesn't look too great there. This is an example of how the web

app looks. You can see that we have our

summarization, our strategic takeaways,

decisions, context, and then we have

action items for each user. We can

collapse these if we like. We can copy

these to clipboard if we want to put

them

elsewhere. We can add it to our

knowledge base, and we can draft a

follow-up email as well. And the

follow-up email will then become

available. It will just sit within your

drafts folder within Outlook

effectively, ready to go. And you can

add more context if you like. You might

want to tell it to pull out some

specific information from the meeting

summarization that you would like it to

highlight in the email. And this gives

you the opportunity to do it rather than

have to change it later. Okay. So we

then save all of that information into

Postgres. We can serve the web app.

Couple of things to note about the web

app. Now this is a static page that

we're serving. So we are using content delivery networks here to actually be able to serve the web app with JavaScript. We can't bundle JavaScript here, but we can use a content delivery network to serve it from outside of n8n. So that's what we're doing here; this is what gives us the

extra functionality and as you can see

there's quite a lot of dynamic inputs

that we need to grab from the structured

output that's coming from our agent to

create this. Okay, so the next part of this workflow relates to the web app and

relates to these two buttons here. Now

you can customize this to suit your own

needs. There's two web hooks. This web

hook relates to the email. So when the

user gets the email, there's a button.

When they click this button, it calls

this web hook. This web hook then grabs

the meeting row from Postgres, and it uses the ID to get the correct row,

which is generated back over here and in

the email template. It's going to grab

the row from the database and then we

are responding with text and we are

responding with the HTML summary that

the AI agent

created which allows us to render this

web

page. The second part is this web hook

here and this web hook specifically

relates to these two buttons. So you can

customize this to suit your needs. It's

set up to take in a post request. It's

again going to take the ID

so it gets the correct row that relates

to the meeting. Then what I'm doing for

my specific purposes is I am grabbing

the HTML and using this HTML to markdown

node. Basically stripping out all of the

HTML so that we just have markdown left.
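n8n's HTML-to-Markdown node does this conversion properly; as a minimal stand-in showing why it saves tokens, here's a few-tag sketch:

```javascript
// Minimal stand-in for the HTML-to-Markdown step: the goal is just to stop
// paying tokens for markup the model doesn't need. This sketch only handles
// a few common tags; use the real node in the workflow.
function htmlToMarkdownish(html) {
  return html
    .replace(/<\/(p|div|h[1-6]|li)>/gi, '\n')          // block closers -> newline
    .replace(/<(strong|b)>(.*?)<\/\1>/gi, '**$2**')    // bold -> **bold**
    .replace(/<[^>]+>/g, '')                           // drop remaining tags
    .replace(/\n{3,}/g, '\n\n')                        // collapse blank lines
    .trim();
}
```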

That is because we're going to send it

to AI and we don't need to give the AI

all the HTML. It's a waste of

tokens. We don't need it. And then

depending on which button is clicked, we

have a switch. If we press the add to

knowledge base button, it's going to come down this route and therefore

it's going to route up to here. You

obviously could take these outputs and

put them on a different platform. You

don't have to use Mem, but it's

worth looking at. And all we're doing is

responding to say it's complete. Other

option is to draft our email which as I

mentioned before, you can

give it instructions to create that

draft. It's going to send it off to our

agent here, which is going to receive the sanitized data from that HTML-to-markdown node and our instructions from the front end. It's also going to send in the participants that we have saved in our Postgres database, and then we have some prompting in here for how we want that email to

look. So do come in here and change that

if you need and again we are specifying

how we want that email to look in JSON down

here. So once it's drafted the email, we

are following the same process as

previously setting an object and then we

are using our Outlook node here to go

ahead and save that draft and it will

then be available within Microsoft

Outlook for you to review and send out

to your contact. Again, I wanted to

change this so that it can work for per

user which is coming in the next part.

So that is overview of the template.

Hope you found it useful. Okay, so now we're going to take this template and change it so that it works with multiple users across multiple Microsoft tenants, really upgrading it from a personal workflow to something that can be used across an organization. To do that, we need to change the credentials that are being used, because, as I mentioned previously, credentials in n8n are tied only to the user that authorized them, which is no good for the multi-user workflow we are going to turn this into. So the first thing that we need to

do is get access tokens for multiple users, and to do that I've developed an application which will allow us to do it. It's an application which can be deployed on your own infrastructure; you can deploy it in Azure if security is a concern, and because you're deploying it in Azure, you can restrict access from the internet to your own tenants, your own employees, your own staff, your own organization, whatever you want to call it. So I've deployed the

application, and this is the application here. The application allows us to sign in with a Microsoft account, and the first person that signs in is granted administration permissions. You'll be asked to accept the scopes from your Microsoft Azure app registration as you sign in; you can grant consent for the whole company, or for a single user. Obviously, the point of this is that other people in the organization can sign into it, and once they sign in, that's really all they need to do. I'm signed into an admin account here. As a standard user, you wouldn't see the users tab; you would only see your own token status. Non-technical users can use this: all they have to do is sign in, and they don't need to worry about anything else. So that's what you would do. You'd get your team, your departments, the people in your organization to sign into this application, and it then takes care of all of the token management. So as an admin, you can come

in here and see what's connected: the users that are connected, which accounts have been connected, whether to SharePoint or Graph, the tenants and tenant IDs, and the current status of the tokens. We have a settings tab here which allows you to grant and revoke API keys, and we have some API documentation which you can look over to determine what we need to do; I'll go into this in a bit more detail. As an admin, we also have a users tab. Within the users tab you can grant people admin permission, and you can grant or revoke API access. So if you don't want to give individual users API access, don't grant it. If you want other members of your development team to have access to all of the tokens within the system, you would give them administration access. Now, looking at the documentation here.

You can see that this endpoint, the main endpoint, essentially returns either your own tokens if you're a standard, regular user, or, if you're an admin, tokens for anyone that has signed in. This is the endpoint we'll call from n8n to get the tokens, and it's going to return them in this format. The access tokens will be in an encrypted buffer array.

So we have to decrypt those tokens within n8n, an extra level of security, and to do that there's a code example within the application which you can copy and paste into n8n. So back over in n8n, I have created a sub-workflow called Get Tokens. We're going to execute this from the main workflow, and it's going to call our get-tokens endpoint. I've already created the API key and saved it here as header auth: X-API-Key, and then our API key. When we execute this, it returns the tokens, and then this code node decrypts them. Now, I have saved the token key as an environment variable; I'm using a self-hosted instance of n8n, so within my env file I have saved the key, and here we use it to decrypt the credentials. So if I show you what that looks like: we've run this and it has returned an array of accounts, and at the other end we then have our access token, user ID, provider, and tenant ID. Don't worry, these will have expired by the time this video comes out, so I'm not exposing anything there. We can see our user ID, you can see that we actually have SharePoint credentials, and you can see that we have different tenant IDs and different user IDs there as well. Okay. And then this

is then going to be returned into our main workflow. So if we come back to our Teams transcription workflow, the first thing I'm going to do is execute a sub-workflow: the Get Tokens workflow. I'll just do that now so we've got some data to work with. Now, we're actually going to use both SharePoint and Graph tokens in this workflow, but for this first step we only want Microsoft Graph tokens, not SharePoint tokens.
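In plain JavaScript, the filter amounts to this (the field names mirror the sub-workflow output shown earlier, and the provider values are assumptions):

```javascript
// Keep only Microsoft Graph tokens; drop SharePoint tokens for this step.
// Field and provider names follow the video's output but are assumptions.
const accounts = [
  { userId: 'u1', provider: 'microsoft', accessToken: 'aaa', tenantId: 't1' },
  { userId: 'u1', provider: 'sharepoint', accessToken: 'bbb', tenantId: 't1' },
  { userId: 'u2', provider: 'microsoft', accessToken: 'ccc', tenantId: 't2' },
];
const graphOnly = accounts.filter(a => a.provider === 'microsoft');
console.log(graphOnly.length); // 2
```
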

So I've added a filter here which keeps anything whose provider is Microsoft; effectively, it filters out the SharePoint tokens. I'm going to connect this to the Get Profile node. What I have done here is remove the previous authentication and add header authentication: an Authorization header of "Bearer" plus the access token from the previous node. So let's have a look at that.
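Sketched as the request the node now sends, with the decrypted token in an Authorization header instead of an n8n credential (`/me` is the standard Microsoft Graph profile endpoint):

```javascript
// Shape of the Get Profile request after switching to header authentication.
function graphProfileRequest(accessToken) {
  return {
    method: 'GET',
    url: 'https://graph.microsoft.com/v1.0/me', // standard Graph profile endpoint
    headers: { Authorization: `Bearer ${accessToken}` },
  };
}
const req = graphProfileRequest('example-token');
console.log(req.headers.Authorization); // "Bearer example-token"
```
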

And we can see we've successfully authenticated with the endpoint, and we have output two different user accounts, two different IDs, across two different tenants. So the next thing that we're going to do is get the call records. To get call records, we have to use this callRecords endpoint, but this endpoint is only available with application permissions, not delegated permissions. So not per-user permissions; application permissions only.
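For each tenant, the credential performs a standard client-credentials token request against that tenant's token endpoint. Roughly (the IDs and secret below are placeholders, not values from the video):

```javascript
// One client-credentials token request per tenant: application tokens are
// tenant-scoped, so a multi-tenant setup needs one of these per tenant ID.
function clientCredentialsRequest(tenantId, clientId, clientSecret) {
  return {
    method: 'POST',
    url: `https://login.microsoftonline.com/${tenantId}/oauth2/v2.0/token`,
    body: new URLSearchParams({
      grant_type: 'client_credentials',
      client_id: clientId,
      client_secret: clientSecret,
      scope: 'https://graph.microsoft.com/.default', // app permissions use .default
    }).toString(),
  };
}
const tokenReq = clientCredentialsRequest('tenant-a-id', 'app-id', 'app-secret');
console.log(tokenReq.url); // token endpoint scoped to tenant-a-id
```
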

So instead of using the per-user access token here, which we don't need, because with application permissions you can return tenant-wide records across your entire tenant. However, the authorization that we use is per tenant, so this credential that I have set up here is for one tenant. What I need to do is call this endpoint twice with two different credentials to get the call records for each tenant, because I'm using a multi-tenant setup. Most organizations will probably only need one, but I have a development tenant and a production tenant, so I want to get information from both. To do that, let's have a look at the OAuth2 API credential that's set up. It's set to client credentials; we have a token URL with our tenant ID, and then we obviously have our client ID and our client secret down here as well. So all we really need to do is change the client ID and the tenant ID to get information from the secondary tenant. So I'm going to call this twice, and I'm going to create new credentials. So, I'm just

going to copy and paste the details over: client credentials, the token URL. I'm just going to open this in a new tab and then copy everything across. I'm going to change this ID based on what we have in our application: I want this tenant ID, so I'll place that here. For the client ID, we go to our Azure app registration here, come to Overview, and see our client ID; I have a client secret saved, protected, in the password manager. As for our scopes, because we're using application permissions, it's whatever scopes are attached to the Azure app registration. If we look at API permissions, you can see application permissions here for call events and application permissions here for call records; we only actually need call records for this one, but nevertheless, okay. I'm going to just name this. So, that is our production tenant. Save this.

Now, when I run this, we should get call records for both tenants. I have changed the date-time value here so that it goes a bit further back in the past, to make sure that we capture some call data. And there we go: we now have call records from two separate tenants. Next, let me move this out of the way a bit and tidy up a little. Now, what I want to do is merge these together.

So, let's use a Merge node, which means I'm going to have to change some things downstream. There we go. Append should work for this; let's just check. So we've popped in the Merge node and used Append, which will just give us everything that we need from both outputs. Then we're going into our code node here, so let's check whether there's anything we need to do. Okay, so we've updated this JavaScript so that it maps all of the IDs onto the Get Profile HTTP request. I've tested this and run it, and we can now see that we've output nine items. So it's found nine records.

So next we need to make sure that there's a join URL. Let's do that. It's found one item of the nine with a join URL. So I need to do a little bit more analysis here, because at the moment we are only returning meetings from one tenant, the one we were testing in, which is the dev tenant.

So, let's do a bit of investigation to make sure that we're actually getting data from both tenants. Well, we know we're getting data from both tenants, but let's just analyze what's going on here. We can see on this side that the first item contains everything from tenant one, whereas the second item contains everything from tenant two. So we probably need to update this JavaScript a bit more so that it goes through all of the items; as we can see here, we are only accessing the data in position one. So we're going to update this code so that it's not just looking at the first item. Let's see what we get: 24.

That's promising; we've got more items coming out there. Let's just do our filter for the join URL. And there we go: we've now got eight meetings that have taken place, which seems right. Okay. Now we're going to get the details of each of these meetings. Let's do that. We got eight items, and it looks like we've got some errors, of course. So, again, we need to make sure that we are using the correct authorization; right now it's only returning details for one user. We need to update this to return details for both users that we're trying to pass through from each tenant. Okay. To do that, from our sub-workflow execution we do have a user ID, and we also have user IDs on the call records. So we should be able to append the access token using this code node, so that each item has the correct access token based on the user ID. So I'm going to update this JavaScript to do that.
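The matching logic can be sketched like this. The account provider ID's first segment matching the Graph user ID follows what's shown in the video, but the field names and the "." separator are assumptions:

```javascript
// Attach the right access token to each meeting item by matching the user ID
// against the first segment of the token's account provider ID.
function attachTokens(meetings, tokens) {
  return meetings.map(m => {
    const match = tokens.find(t => t.accountProviderId.split('.')[0] === m.userId);
    return { ...m, accessToken: match ? match.accessToken : null };
  });
}
const tokens = [{ accountProviderId: 'abc.tenant1', accessToken: 'tok-1' }];
const meetings = [{ userId: 'abc', subject: 'Standup' }];
console.log(attachTokens(meetings, tokens)[0].accessToken); // "tok-1"
```
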

Okay, so let's look at the input values from the Get Tokens sub-workflow. Coming back over to that workflow, if we have a look at the output, what we can actually see is that we have a user ID here and an account provider ID. Now, we don't actually want this user ID; what we want is the account provider ID, or rather just its first part, which matches what's coming from the Get Profile node here. Because if we look at the ID, you can see that it is a match for this part. So we're going to update this JavaScript within the sub-workflow, which I've just done, and I'm going to test it. Now our user should hopefully be correct, so I'm going to save that. If we come back over to our main workflow here and execute again with the updated JavaScript, we should now have two extra items at the end: we have our Microsoft token and we also have our SharePoint token, because we're going to use that a bit later as well. So we've got everything that we need to continue.

So then we are going to filter, and then we're going to get our online meeting details. For this, we want to change the authentication back to none and send a header: the Authorization item, using our Microsoft token. And we should be able to get details for each meeting without any errors this time. There we go: we now have meeting details for both users across two different tenants, ready to continue with the rest of the workflow.

Okay, so now we've identified that these meetings did indeed take place online. Let's continue with the rest of the workflow; I want to test each step to make sure that everything works as expected. We're going to make sure there are no duplicates here, and it seems they've been reduced down. Interesting. Let's just see what the output of that is. Nothing missing there, from memory, but I'm interested to know why it's reduced so much. So I'm going to do some analysis here and make sure this is nice and clean and tidy.

tidy. You've identified that there is

some duplicates coming out of this code

node which we can tell by the IDs being

the same. So I'm just going to have a

look at this JavaScript and tidy it up a

bit. Okay. So what's actually happening

is we are there's two items coming out

of the get profile node here. What we're

actually doing is we're we're hitting

each of these end points twice which is

causing duplications over here which we

don't

[Music]

want. So we want to make sure that we're

limiting what we're doing here.

So, I think we need to just tighten this

up a little

bit. And when we set the time

range, I'm also going to use some values here to tidy up. We could potentially use the user ID for that, so I'm going to set the user ID here. We need a bit more than that, though, because we're going to two different tenants. So I'm going to add the tenant ID here as well, and we're going to use the tenant ID to split these items and route them to the right HTTP request.
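The routing the If node implements, sketched in plain JavaScript (tenant IDs are placeholders):

```javascript
// Split items by tenant ID so each batch goes to the HTTP request holding
// that tenant's credentials; the same true/false split as the If node.
const items = [
  { tenantId: 'tenant-a', userId: 'u1' },
  { tenantId: 'tenant-b', userId: 'u2' },
];
const devTenant = items.filter(i => i.tenantId === 'tenant-a');   // true branch
const otherTenant = items.filter(i => i.tenantId !== 'tenant-a'); // false branch
console.log(devTenant.length, otherTenant.length); // 1 1
```
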

So the If node's outputs differ; let's see what order they're coming out in. If we test this, we're going to say: if the tenant ID is equal to this, we should get one true and one false, and we route the true one here and the false one here. Okay, let's give this another test and see if it removes our duplicates issue. There we go, that looks much better: four items. Then we're going to execute our Remove Duplicates node, and we've got four items there. We're only removing duplicates that are present in the current run, not across previous executions, and that is because, as I explained in the template video, we are going to do some checking here around processing. Okay. So let's check our Combine IDs node and check there are no duplicates here.

There are none; let's check the data looks good. If it does, then we can hit our check-if-processed node. One of these meetings has been processed previously, outside of this video, so we keep our matches. Great. Let's see what we've got here. Everything else is pretty good. Let's unpin this, and I'm going to execute this node over here. It's telling us that all three of these were scheduled by third-party tools, which I actually don't think is correct. So I'm going to do some analysis and figure out what's going on here.

I'm noticing in the output here that we have the same user ID, which is incorrect, so something else around here needs to be changed. Okay, after some further analysis, I can see within our Postgres node that we have got this set to "first". Previously this only needed to work with one user; now it needs to work with multiple users, so we need to update it accordingly. In this Combine IDs node we have an organizer object which has the user ID, which is what we're looking for. So, as opposed to using our Get Profile node here, we will access it this way. I'm going to switch this over to JSON and say user ID here.

We're going to get rid of that and use the user ID here, and we're looking for the name of the organizer, effectively. The name of the organizer isn't actually available from these outputs, so we're going to have to map it from somewhere else. Coming back to our workflow and looking at the call records, we can actually see that we have the display name of whoever the call record belongs to, which in this case is whoever organized the meeting. So all we need to do is map that to our outputs within this code node, and that should give us an available key to map over here.

So let me make some more changes to this JavaScript. Okay, I've made a slight change so that we include the display name. Let's just have a look at the output. There we go: we have got the organizer display name in the output. Great. So let's run the whole thing again up to this point. Okay, slight error there on my part: the output of the Combine IDs node, because we're including all other fields, is actually coming from the online meetings request, not from this code node. So we need to join the data from the code node and the online meeting details HTTP request, and we should be able to use a Merge node for that. Let me analyze that and I'll come back to you. So, let's have a look to see what's common. We're using the join URL to get the meeting; there's the join URL, and yes, we have that here in the output. So let's see if we can merge on that here, and we'll do that.

Then we'll say Combine, and we set fields to match on. From input one... let me think about this. We've got four items there and a bunch of different items on the other side, so we probably want to split them out first before we do this. Get rid of that, move this over here, pop this here, and we're going to use our join URL. Let's have a look: we've got 12 items here, so let's do another split out. Apologies if I'm going a bit too fast; do feel free to slow down the video. Actually, we don't need to split out, because they are not in an array; let's just map this straight over here. Input one, input two; we've got our join URL, not nested anywhere. Let's see if that works. Not quite. Let's see why. Okay, these have slightly different names: we've got a join URL and then we've got a join web URL.
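Joining on differently named keys is what the Merge node is being configured to do here; a sketch of the equivalent logic (field names follow the video, sample values are placeholders):

```javascript
// Join two inputs on differently named keys: joinUrl on one side,
// joinWebUrl on the other, combining matched items into one object.
function joinOnUrl(codeItems, meetingItems) {
  return codeItems.map(c => ({
    ...c,
    ...meetingItems.find(m => m.joinWebUrl === c.joinUrl),
  }));
}
const merged = joinOnUrl(
  [{ joinUrl: 'https://teams.example/j/1', displayName: 'Jo' }],
  [{ joinWebUrl: 'https://teams.example/j/1', subject: 'Standup' }],
);
console.log(merged[0].subject); // "Standup"
```
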

So we're going to tell the node that these have different names: input one is the join URL and input two is the join web URL. Let's see. There we go; we've extended the outputs. Let's see if we now have our display name, which we do. Great. We'll move this over here. We've not changed the structure of the data, we've just extended it, so everything else should be good. Let's just check over here.

We just need to make some changes here, making sure that this merge node is doing what we want: merge both inputs together, keep matches, and for clash handling, let's prefer input one. Let's run this again. Okay, that's okay, I think. So, I've come back over to our merge node and switched it from a deep merge to a shallow merge, which means we can retain the structure of the data and we no longer have an error with the participants. I'm going to switch this back to how it was previously, for consistency.

Let's grab the organizer's user principal name. So, we may have just wasted a few minutes there by getting the display name, because it's actually the user email we're looking for; but nevertheless, we needed to split everything out. Let's go into participants, then organizer, and grab the user email. There's only going to be one organizer for the meeting. Okay. Right, let's run this again. We've got our three items, and now let's see what we have. Hopefully we should have different user IDs.

It's failed at this point, and that is because these rows already exist in the database, so I'm going to have to clear them out. Let's execute this step again. We've still got our three items. I'm going to check what the outputs are from the database. Yeah, now the user's changed; this is to be expected, because I would expect this meeting to come from the production tenant.

Okay, now let's run our merge node over here and see what's happening. Now that we have got the correct items coming out of the merge node, we need to update the get-transcript-data node so that we are using the correct tokens, effectively. Same process: we have our token data here. Now, I'm expecting n8n to possibly get confused, not knowing which item to use, if I map it directly over here, but let's give it a shot and see. Okay, so the authentication tokens have expired. Let's run the whole thing up until this point so that we can get new tokens.

Within our HTTP request for transcription data, we've switched from the previous credentials that we had within n8n, and we are now sending the Authorization header with the access token from our combined nodes, which is actually working fine, which is good. We can see we are outputting two, so we found two transcriptions there, and we've errored on one. That one must have been scheduled via a third-party tool, which means it can't find the transcription; there's no online meeting object for a meeting that was scheduled with third-party tools. So, as I mentioned in the overview, we're going to search for that down here. We know this one's going to work; we just need to change these permissions. But here we're going to search for the online meeting. Of course, we're going to have to change this to use our token as well, since this is Graph too. So absolutely great. What I'm noticing, though, is that

the date value is wrong here. This is meant to search on the same day that the meeting took place, but the date value is incorrect, because we're taking a new date, the date the workflow was run. That's no good, so I'm going to have to update this to use the date that the meeting actually took place. We can actually get that from our merge nodes over here, because we have got the meeting start. So instead of using just a new Date() expression, we get rid of that, replace it with the meeting start, and then just modify the expression. There we go; now it should find the meeting. Brilliant. So even though it's not available via the transcription endpoint, we have now done a search and still been able to find the meeting. Essentially, what we're doing here is searching SharePoint and OneDrive for the video of the meeting.
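The date fix can be sketched like this: derive the search date from the meeting's own start time rather than from new Date() (the field name is an assumption):

```javascript
// Use the meeting's start time, not the workflow's run time, when searching
// for the recording; otherwise meetings processed on a later day never match.
function searchDate(meetingStartIso) {
  return meetingStartIso.slice(0, 10); // ISO date portion, "YYYY-MM-DD"
}
console.log(searchDate('2024-05-21T09:30:00Z')); // "2024-05-21"
```
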

Okay, let's continue. We need to change all of these to use the correct authentication. Now, this is where we're looking at SharePoint, and there are a couple of things that need to be dynamic here: we need to make sure that the URL is dynamic, and we obviously need to send the authentication token as well. So let me have a think about that. Okay, what I have identified is that when we search for the meeting, we actually have the web URL here. So we can use that in our list-transcriptions request; essentially, we want to replace this with the first part of this web URL. So we're going to need an expression to strip anything after this part.

I'll drag this over here, and I'm going to ask ChatGPT to update this expression to just remove this last part. ChatGPT has given me some lovely regex here to extract the bit that we need.
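An expression along these lines does the trick. The exact path shape of a SharePoint/OneDrive webUrl varies, so treat this pattern as illustrative rather than the regex from the video:

```javascript
// Trim a SharePoint webUrl down to the personal-site root so it can prefix
// the list-transcriptions request. The "/personal/<user>" shape is assumed.
function siteRoot(webUrl) {
  const m = webUrl.match(/^(https:\/\/[^/]+\/personal\/[^/]+)\//);
  return m ? m[1] : webUrl;
}
console.log(siteRoot(
  'https://contoso-my.sharepoint.com/personal/jo_contoso_com/Documents/Recordings/Standup.mp4'
)); // "https://contoso-my.sharepoint.com/personal/jo_contoso_com"
```
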

We're using a split out, so that should be absolutely fine. Then we just need to change the authentication: this time we need the SharePoint token. Head down to Combine IDs and we'll get our SharePoint token over here, and we'll hope this works. I need to add "Bearer". There we go. We now have our temporary download URL for this hidden transcription. This endpoint is quite hard to find, but it is there. Now we have the download URL for the transcription that wasn't available via the transcription endpoint.

Oh, and of course, we need to get rid of this; you don't actually need authentication to get that, because it's a temporary, short-lived URL. And now we should be able to run all of this up to our merge node. We've got two items, now normalized, which is great. The next thing we need to do is

update this Microsoft Outlook node into an HTTP request. Unfortunately, the only downside to doing this is that we can't use nodes which require credentials, so we have to use HTTP requests. It's not a big deal, because we can just import it as curl. So I'm going to run this. It's dealing with quite a large amount of data, so, temporarily whilst I'm testing, I'm going to limit this to one. All I've done there is give the node to ChatGPT and told it to give me the equivalent in curl. And there we go; that's all you need to do, and you can see it's even filled out this, so we can put our access token here. We probably will need to update some of these expressions to match what we had previously. Let's do that. We also probably want to make a change to what's coming out of this merge node, because we're going to need our token for the HTTP request. Okay. So for these

routes, we need our Microsoft token. We've got the email send request and the app registration. Do we need to add Mail.Read or Mail.Send? Yes, we do need Mail.Send, so we're good. Add the token: come back down to our Combine IDs, grab our Microsoft token, and map that over here. Now, obviously, this is going to break our form route up here, but to be honest, it's very unlikely that this is actually going to be used; it's much more applicable for personal use. I'm going to leave it there, though, in case I want to use it in the future, but it's probably highly unlikely, so I'm not going to update this one. You could connect it and do that if you wanted, but I'm not for the time being.

Okay, execute this and make sure that we have our token. Okay, that gives me the opportunity to execute this again, and then each of the outputs should have an access token. So we are going to move this and replace this here. No, we do not want to create an infinite loop; that would be terrible. Let's take this request. I just want to do a comparison with what we had here, because we're going to have to update these expressions. Okay, we're going to want to extend these so that they include the email. So where can we get that from? There's the email; great, we take it from the merge nodes, and we'll just use our email and execute to this point. Okay, so now we can access the user email from the loop. Save this. Okay, so our HTTP request here failed.

We've formatted something incorrectly; let's have a look and see what that could possibly be. My guess would be the HTML that's going in here. Yeah, so we're going to need to stringify this into a JSON string.
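The fix is to let JSON.stringify escape the HTML before it's embedded in the request body; raw quotes and newlines in the summary otherwise break the JSON. A sketch using the Graph sendMail payload shape (the recipient and subject are placeholders):

```javascript
// Build a sendMail body where the AI-generated HTML is safely escaped.
// JSON.stringify handles the quotes and newlines that broke the raw request.
function sendMailBody(to, subject, html) {
  return JSON.stringify({
    message: {
      subject,
      body: { contentType: 'HTML', content: html },
      toRecipients: [{ emailAddress: { address: to } }],
    },
  });
}
const body = sendMailBody('a@b.com', 'Meeting summary', '<p>Hi "team"\nnotes</p>');
console.log(JSON.parse(body).message.body.content.includes('"team"')); // true
```
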

That fixes it. It's running the analysis again; I should have pinned this. Okay, so now we're getting a slightly different error, because of course I haven't mapped the token. So let's do that: map the token, and this time I'm going to pin the AI node and execute again. This time it sent the email. Great. But we can see our save-data step didn't work. If I have a look at my email inbox, I can indeed see the email, so let's figure out what's going on with our Postgres node. We need the user ID; it's due to some database restrictions, so let me sort that out. I need to get the user ID from somewhere, and again I want it to come in on the loop so it's nice and clean. So I'm going to map the user

ID from somewhere else, in the same way that we have done previously. We've got the user ID here, so do exactly the same thing. Okay, I have now mapped the user ID to both of our Set nodes. We just run it again, exactly the same as before, and our user ID went into the loop. Great. Next is our save-data node: I'll take an expression, go to the loop, and grab the user ID. Test this, and we're good to go. So, we have

now revamped this workflow to work with multiple users: as long as the user signs in via our token manager, or any other solution that you want to use as a token manager, we can get those tokens. We now have a multi-user workflow that can be used across the organization. And that's a wrap. We've gone from capturing Microsoft Teams transcriptions all the way to detailed, participant-aware summaries, drafting follow-up emails, and pushing everything out to a knowledge base, and, most importantly, you've seen how to take what's normally a single-user automation and scale it to work securely across an entire organization using our MS AutoF 365 platform. If you found this useful, check out the links in the description. The single-user version is on the n8n template marketplace; the multi-user version and the authentication app are available via our blog, along with more examples of how we're using AI and automation to drive real impact inside businesses. If you have any questions or want to see how this could be adapted for your own use case, feel free to get in touch. We do have a community over on our blog; feel free to check that out and join, as I am active in there. I look forward to hearing from you. Thanks for watching, and see you in the next one.
