
Season 4 Episode 1: Biggest Risks (and Possible Rewards) of AI in Clinical Data

By Veeva Systems Inc

Summary

Key Takeaways

  • AI flags audit trail anomalies: With the new ICH GCP R3 release requiring audit trail reviews, AI can automate detection of unusual patterns and signals in millions of records that humans can't review manually, raising flags for action instead of endless Excel filtering. [03:07], [03:44]
  • Optimize queries with AI patterns: AI can analyze patterns in queries and data changes within a study to identify unnecessary queries that rarely fire, allowing adaptation during the trial to reduce wasted effort. [04:09], [04:32]
  • Non-deterministic AI needs oversight: AI's non-deterministic nature means it may give different answers with the same inputs, requiring humans to watch, understand changes, and intervene to maintain trust, especially in regulated spaces. [08:49], [09:50]
  • Human-in-the-loop risks complacency: Placing humans in the loop can become a rubber stamp if they are not properly empowered, leading to laziness and loss of instincts over time, so systems must ensure real value or risk liability. [14:07], [15:10]
  • Standardize protocols for zero-week builds: Standardizing protocols using standards like CDISC USDM and M11 enables AI to generate the entire eCRF and data cleaning rules immediately upon sign-off, achieving zero-week study builds. [28:41], [29:20]
  • Simplify to avoid additive layers: The industry has added layers like excessive edit checks without simplifying, so AI should strip away unnecessary old layers to focus on meaningful data collection and analysis. [31:52], [32:15]

Topics Covered

  • How can AI automate audit trail reviews?
  • Why must humans stay in the AI loop?
  • Can AI enable zero-week study builds?
  • Simplify trials before adding AI layers?

Full Transcript

[Music] Hi, welcome to Unblinded, a clinical trial podcast.

I am Manny Vasquez, senior director of clinical data strategy here at Veeva. I'm joined today by a fantastic group of guests.

To my left, we have Mr. Robert Bergen from Bayer, my colleague Drew Garty here from Veeva, and from ClinFlow, Mr. Doug Bane.

Thank all three of you for being here today.

So, Robert, why don't we start with you?

For those out there who do not know who you are, would you mind doing just a quick intro of yourself and some of your background?

>> Sure. I'm currently running a shop called Clinical Digital Innovation at Bayer's Clinical Development and Operations, and I've qualified for that position by being a statistical programmer, data manager, standards man, and system integrator for years, doing the full gig in our corner.

Right now I'm behind introducing Veeva CDMS at our company, making good progress there, and glad to be here.

>> Great. And Doug, would you mind the same?

>> Sure.

>> So, people call me a veteran, I suppose, especially Drew.

I've been in the industry for a number of years.

Uh mostly from the software vendor side of things.

More recently though, I worked at a CRO for five years as chief technology officer, and more recently still I founded a consulting company, ClinFlow, with an aim to focus on the gap between the technology and the clients.

It's something I've seen for a number of years: really good technology can be applied, but applied in an inappropriate way, and it doesn't result in the sort of savings and quality that some sponsors expect.

So really I'm working on that interchange, making sure the technology works.

>> Excellent. Well, thank you both again for being here. One of the first things I wanted to put out to you guys is just to start this conversation going.

Doug, you'd mentioned that in some of the sessions today, you're hearing some really exciting possibilities about what AI could be, what we could do with it, and maybe some things we hadn't thought about before. So, I want to start off talking about what the real possibilities are.

I would really like for us to stay grounded in reality.

I want to be very cautious not to start throwing the AI label on everything and just saying, "Here we go. We've innovated." What are the real use cases we can sink our teeth into to deliver some legitimate value in the short term?

>> Well, I think the first point is: it's value.

You say the word value, you know, it's so easy to go that's cool.

Let's do that. But it's not just what's cool.

I mean, we like cool, you know, if you're all techies.

But it's about first of all, hold on a second.

What is the real value?

You're trying to identify a problem and you're applying AI to solve that problem.

So I do think there are a number of areas. You know, when you've got a database, databases hold information, and it's about being able to leverage that information.

So with artificial intelligence you can take the general knowledge, but you can supplement that knowledge with information in an application database, and all of a sudden go: oh, with the new regulations there's a requirement. The new ICH GCP R3 release makes it clear that you should be reviewing audit trails. Now, nobody's going to review an audit trail of millions of records by eye, going "well, that's an interesting record there" or something. It has to be automated, and AI is in the perfect position to see what looks unusual, you know, through patterns, signals, and so on.

So that's to me a very obvious one.

So instead of people thinking, "Oh my goodness, I'm going to have lots of listings, I'm going to be in an Excel spreadsheet doing lots of filtering," you're getting AI to say, "Right, I've got patterns to look for. I'm going to detect these patterns and raise the flag," and then actions can be taken on that.

So that was an obvious one.
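To ground Doug's example, here is a minimal sketch (in Python) of the kind of automated pattern flagging he describes. The record fields, thresholds, and rules are invented for illustration, not taken from any particular EDC product or audit trail schema.

```python
# A sketch of "raise the flag" over audit trail records instead of
# filtering them by hand in Excel. Fields and thresholds are hypothetical.
from collections import Counter
from datetime import datetime
from statistics import mean, stdev

def flag_unusual_editors(audit_records, z_threshold=3.0):
    """Flag users whose data-change volume is an outlier vs. their peers.

    audit_records: iterable of dicts like
        {"user": "site_101_crc", "timestamp": "2024-05-17T22:41:00",
         "action": "data_change"}
    """
    edits_per_user = Counter(r["user"] for r in audit_records
                             if r["action"] == "data_change")
    counts = list(edits_per_user.values())
    if len(counts) < 2:
        return []
    mu, sigma = mean(counts), stdev(counts)
    flags = []
    for user, n in edits_per_user.items():
        if sigma > 0 and (n - mu) / sigma > z_threshold:
            flags.append({"user": user, "edits": n,
                          "reason": f"edit volume {n} exceeds peer mean "
                                    f"{mu:.0f} by more than {z_threshold} SD"})
    return flags

def flag_off_hours(audit_records, start_hour=22, end_hour=5):
    """Flag edits made in the middle of the night, one common review signal."""
    flags = []
    for r in audit_records:
        hour = datetime.fromisoformat(r["timestamp"]).hour
        if hour >= start_hour or hour < end_hour:
            flags.append({"user": r["user"], "timestamp": r["timestamp"],
                          "reason": "off-hours data change"})
    return flags
```

The output is a short review queue for a human, which is the point of the pattern-based approach: people act on flags rather than scroll through listings.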

So it's leveraging the data, existing data.

There are obviously some advantages to having longer-term data if you're going back a number of years, but even within a study you can look at patterns of query and data change and go: hold on a minute, this query's been raised X number of times, maybe we shouldn't be raising it anymore.

We've talked about unnecessary queries.

Most queries don't fire. You can actually use AI to go: yeah, you don't need that. So I think there's an interesting opportunity to look at the live data as you're going through a trial and say: we can adapt here.
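A rough sketch of that in-study adaptation: given a hypothetical query log, identify edit checks that almost never fire, or that fire but rarely lead to an actual data change. The log format and cutoffs below are assumptions for illustration only.

```python
# Suggest edit checks to retire based on how they actually behave in the
# live study. The log schema and cutoff values are invented.
from collections import defaultdict

def suggest_checks_to_retire(query_log, records_evaluated,
                             min_fire_rate=0.001, max_useful_fraction=0.1):
    """query_log: list of dicts like
        {"check_id": "VS_BP_RANGE",
         "resolution": "data_corrected" or "confirmed_as_entered"}
    records_evaluated: dict of check_id -> number of records the check ran on
    """
    outcomes = defaultdict(lambda: {"fired": 0, "useful": 0})
    for q in query_log:
        o = outcomes[q["check_id"]]
        o["fired"] += 1
        if q["resolution"] == "data_corrected":
            o["useful"] += 1

    suggestions = []
    for check_id, evaluated in records_evaluated.items():
        fired = outcomes[check_id]["fired"]
        useful = outcomes[check_id]["useful"]
        if evaluated and fired / evaluated < min_fire_rate:
            suggestions.append((check_id, "almost never fires"))
        elif fired and useful / fired < max_useful_fraction:
            suggestions.append((check_id,
                                "fires, but rarely leads to a data change"))
    return suggestions
```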

>> Yeah, the audit trail review is an interesting one.

It's been one of the largest topics of conversation in the last six months, ever since the, you know, the guidance dropped.

And so, Robert, I'm actually curious, within your role in the sponsor setting, how much has that raised alarm bells for folks?

>> I think we have a procedure about reviewing audit trails, and we don't really do that right now.

So it will hit us. Yeah. And you're totally right.

This is a good use case example.

I think good use cases are those where AI (and that's the bottom-line strength) can do things faster than people can.

So anything that requires a lot of studying, a lot of review, a lot of reading, a lot of combinations.

And I would point to the low-hanging fruit in document generation, document consistency.

So there's the documentation side of the world; the data side is certainly an interesting one too. Queries, yes, of course.

When we have a digital protocol one of these days: generating the EDC build from that, generating the clinical study report minus the results, because we don't know them yet, but we can wait for them and fill them in.

I think the next years will be very interesting for building such automations, based on a tremendous workforce called AI.

Yeah, I look forward to that. What that does to the job market is a different story.

Yeah, that might be a different discussion too.

>> It's an important discussion.

>> It is socially. Yes.

Your thoughts on this from a technology perspective? Being within a vendor, we hear a lot of noise about what people would like. Some of the content today was around trying to articulate what AI is versus what AI is not, and so I'm curious if you could pull on that thread a little bit to help people understand what it is we're really talking about.

>> Yeah. So, to lead into that question, let's use this as an example.

Audit trail review is a new requirement that should actually have always been there. Why? Because it's behavioral detection.

The problem with our methodology is that in order to do that, as you mentioned, you have millions of records.

It's nothing that a human can do well without tools.

And of course humans do well with accumulated data en masse through visualizations, signal detection, statistics, identifying those things.

But what's interesting is if the goal is to identify behavioral anomalies and issues, why would we do that once a month or once a quarter, right?

We actually need to get in front of it.

The goal is to actually improve quality.

So you have to detect behavioral issues sooner.

Now what would we do?

Have humans review that data every single day looking for anomalies? No, we need automation.

So from my perspective, it's about the additional tools that are available, because you can use a variety of techniques to automate the detection of anomalies in your audit trail.

Statistical analysis works, right?

We just have to write programs to routinely look through that data.

But leveraging LLMs with good training data sets, in concert with, as Doug was saying, a reliable data set, brings together that signal detection.

It's just another methodology that can be employed.
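One way to read that "in concert" point is a two-stage pipeline: a deterministic statistical screen narrows millions of records down to a few flags, and only those reach a language model for triage. A minimal sketch, where `llm_describe` is a stub standing in for whatever model call an implementation would actually use; none of these names come from a real product.

```python
# Two stages "in concert": deterministic screening first, then a language
# model is asked to explain or classify only what was flagged.

def statistical_screen(records, score_fn, threshold):
    """Deterministic first pass: the same inputs always yield the same flags."""
    scored = ((r, score_fn(r)) for r in records)
    return [(r, s) for r, s in scored if s > threshold]

def llm_describe(record, score):
    # Placeholder for an LLM call (e.g. to a private, snapshotted model).
    # A real implementation would send the anomaly context and get back a
    # draft classification for a human reviewer, never an automatic action.
    return f"Score {score:.1f}: suggest human review of {record['user']}'s activity"

def review_queue(records, score_fn, threshold):
    flagged = statistical_screen(records, score_fn, threshold)
    # Only the small flagged subset ever reaches the non-deterministic stage.
    return [llm_describe(r, s) for r, s in flagged]
```

The design choice here is that the non-deterministic component is confined to describing anomalies, so the set of records under review stays reproducible.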

I think what matters from an architectural perspective is maturity.

We are in an explosive time of new techniques and capabilities, but really we need to have mature, robust solutions. And we're shifting from a deterministic model, meaning effectively you always get the same result (logic, if-then-else statements), to something non-deterministic.

It may give you different answers with the same inputs. How do we work with that?

How do we relate to that?

There's a human machine relationship that actually still has to be worked out, especially in a regulated space.

And so it sort of leads us down this path: new techniques probably require new foundational understanding and new methods to work with the new tech.

And that often takes time to mature.

So you have a proof of concept of something: it could do that, but actually it's not yet viable for use in a compliant manner.

And that's where our industry struggles, right? We can be on that bleeding edge, but we will bleed.

>> I think it was Ray who put this up in his show today: which data manager would want to live with getting a different answer every time the question is asked? And I didn't raise my hand.

I had to think about it, but I would be happy, because it has a reason.

There's obviously learning in the background, or new data. So if a result changes, I would like to know about it, and I would like to find out why and understand it.

And that's probably the thing: you can't let an AI go alone.

You have to watch it.

You have to understand what's happening and you have to intervene if needed. But I wouldn't mind if my query result changed from day to day based on the situation behind it.

I would mind having to watch query results in general, but that's a different story.
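Robert's requirement can be made mechanical: don't forbid answers from changing, but detect when they do and demand an explanation. A minimal sketch, assuming an in-memory store and a stubbed `ask_model` callable; a real system would persist this history and route change events to a reviewer.

```python
# Notice when the same inputs yield a different answer, and surface a
# change event for a human to investigate instead of silently accepting it.
import hashlib
import json

_previous_answers = {}

def _input_key(inputs):
    return hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest()

def watched_ask(ask_model, inputs, model_version):
    """Run the model; if these inputs ever produced a different answer
    before, return a change event alongside the new answer."""
    key = _input_key(inputs)
    answer = ask_model(inputs)
    change_event = None
    prev = _previous_answers.get(key)
    if prev is not None and prev["answer"] != answer:
        change_event = {
            "inputs": inputs,
            "old": prev["answer"], "old_version": prev["model_version"],
            "new": answer, "new_version": model_version,
            "action": "notify data manager: find out why this changed",
        }
    _previous_answers[key] = {"answer": answer, "model_version": model_version}
    return answer, change_event
```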

Yeah, I think it's going to be a challenge with the privacy concerns.

You know, you've got the public LLMs that are continually learning, and then you've got more of the private ones that are basically almost like a snapshot in time, where any information or knowledge they're gaining is controlled. You don't want some bad actor, let's say, influencing the decision-making process for your clinical trial. Maybe not even obviously, just by whatever mechanism populates that data.

So, there might be a better approach to say: actually, we're going to continue to evolve this model in some controlled fashion, populating it with information that we consider safe.

So go back to that example of edit checks where you're getting a different answer. Well, you only want that answer to be different based on the data it's getting in that trial, and not potentially data coming from another trial, for example.

So, I think there are things we have to learn, and it's a case of learning, improving, learning, improving, and not stumbling, not being bleeding edge, as you put it.
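A small sketch of that controlled-evolution idea: pin the model to a snapshot and admit only vetted, in-scope data into its context, so answers can't be influenced by another trial or a questionable feed. All names here are illustrative assumptions, not real configuration keys.

```python
# Controlled evolution: a frozen model snapshot plus a scope filter on
# everything that reaches it. Identifiers are invented for illustration.
PINNED_MODEL = "private-model-snapshot-2024-06"  # frozen, not continually learning
SAFE_SOURCES = {"edc", "lab_transfers"}          # vetted feeds only

def build_context(records, study_id):
    """Admit only records from this trial and from sources considered safe."""
    return [r for r in records
            if r["study_id"] == study_id and r["source"] in SAFE_SOURCES]
```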

I think there's a difference between asking a technology to replace human logic and answers, and asking it to replace the human decision-making process.

I think it's different.

You'd probably be fine, I'm assuming, if it were offering you suggestions, guidance, insights. But if you had an employee who gave you two different answers with the same data set, you'd probably grow tired of that person and you'd lose trust. And I think there's this moment you're comfortable articulating, right: that you'd be fine with a certain lower trust level in certain areas, but probably not the decision, and that's controlling an outcome. So I think there's a nice on-ramp into this as we build trust and understanding, which I absolutely agree with.

That on-ramp to making decisions is suggestions and insights: doing the research and heavy lifting for us, getting us those answers.

I think that that is the path forward.

So that notion of human in the loop, human in control, especially in a heavily regulated area where someone, at this time, still needs to be accountable: we need that methodology.

>> Having to develop the trust in the result.

>> Yes. Yeah. Yes.

Statistical trust.

>> Yeah. There was a meeting I was at a couple of months ago, with a regulator from the German regulatory body. Really nice guy. I don't know if he was challenging the idea of human in the loop, but he kind of put the question: how many other things out there could we automate if we wanted to? Like trains. You have monorails and things at airports that run autonomously; you could do that with subway trains, but they have conductors. Why? You could make them run automatically. Okay.

So why does there have to be a conductor there?

Because for the same reason, right?

Like for the one time you need them there, you need them there. That human has to be there.

Is there a point, do you think, within any of the AI work we'd be doing in our industry, where we actually feel comfortable removing that aspect?

>> I would actually go further, because one of the risks in human-in-the-loop is that it's almost like a tick in the box. So let's say AI delivers a big report over to someone saying: right, time for you to review it. It'd be so easy to say, "Yeah, yeah, that's fine." Right?

And it's more a rubber stamp.

They're actually not reading through and checking the content. So I think it's actually the duty of the systems developers to say: right, we're putting the human in the loop, so how do we ensure that this human is empowered to appropriately look at it and make decisions, and not just be a rubber stamp, meeting the regulator requirement, we're good. We really need to be careful that we have value in that, and I think if there's no real value proven, it should be taken away, because it could potentially be a liability.

>> You're really stating that we have the risk of becoming lazy.

>> Yeah, I mean, there's a risk of not staying engaged. Because if you're not doing it, if you're not in the detail... like as data managers, we live in the detail.

We feel it when it's wrong. We know it; we have instincts associated with it.

But if we're not doing the work, what happens to those instincts over time, generationally, as you step away from it?

>> It's complacency, right?

>> It's complacency. But we've had those fears over and over and over again, and the world still continues on.

We've felt exactly this before: this tech is going to do damage and harm. But humanity survives and keeps using it. It's a new tool set.

So to your point, what does that look like?

I don't know.

>> I would actually challenge whether my average Berlin Uber driver is better than a self-driving car. Yeah.

From the risk perspective. Yeah.

I'm not so sure. And the same is probably true for a train conductor. In the Berlin U-Bahn network, is that needed? We still have them.

Every train has a person on it.

Why is that? Is that just tradition?

Is that risk management?

>> You've made a really good point there: it's about risk.

>> Risk management. That's exactly right.

>> You make an assessment and you basically log that assessment. And I think in all the AI human-in-the-loop situations, you really should go: right, we're making this assessment.

We're formalizing it, and we've decided that because of these scenarios we don't need a human in the loop.

Right? I think if you just go, well, you know, we had a quick chat and it was okay and therefore we didn't review it...

No, I think the regulators are going to go: show me where you recorded a risk assessment for that one.
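That "log that assessment" point implies a concrete artifact a regulator could ask for. A sketch of what such a record might hold; the fields are illustrative assumptions, not drawn from any regulation or template.

```python
# A formalized, retrievable human-in-the-loop risk assessment record.
# Field names and values are invented for illustration.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class HumanInLoopAssessment:
    process: str                  # the automated step being assessed
    human_in_loop: bool           # the decision being justified
    rationale: str                # why the residual risk is acceptable
    scenarios_considered: list = field(default_factory=list)
    assessed_by: str = ""
    assessed_on: date = None
    review_due: date = None       # assessments get revisited, not set forever

assessment = HumanInLoopAssessment(
    process="auto-distribute audit-trail anomaly report",
    human_in_loop=True,
    rationale="non-deterministic output; a reviewer must confirm each flag",
    scenarios_considered=["false-positive floods", "missed critical anomaly"],
    assessed_by="data management lead",
    assessed_on=date(2024, 6, 1),
    review_due=date(2024, 12, 1),
)
```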

>> Yeah. I do I do think we need to apply though quality control because you have the risk of hallucination from data.

Right? So having human in loop if we take that out which I agree with can do be done in certain cases. um where's the quality control that at some point if you're just allowing full automation and data existing data to drive action and you're using new you're not just stat on a stuck on a static data set but you're allowing it to grow who's making sure it's right and so that's managing hallucination and bringing that back in I do think we become verifiers of accuracy at some point >> maybe that maybe that's a model you know you've got test driven development in software engineering >> right Yeah, >> maybe we need something similar on this is what we expect.

>> Yes.

>> Did it do it right?

So it's almost like a test approach up front for everything you do with AI.

>> Every fifth, whatever it is, or every 50th: do you have the consistent results that you expect?

>> Monte Carlo approach too.

>> Monte Carlo approach.

You're absolutely right. Yeah. Yeah. I like that.
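That "every fifth, every 50th" idea can be phrased as a routine consistency check: re-run a reference set with known expected answers and measure agreement. A minimal sketch under those assumptions, with `ask_model` again a stub for whatever model call is in use.

```python
# A Monte Carlo-style trust metric: sample known cases, re-ask the model
# several times, and measure how often every repeat matches the expected
# answer. The sample sizes and threshold are illustrative.
import random

def consistency_check(ask_model, reference_set, sample_size=20, repeats=5):
    """reference_set: list of (inputs, expected_answer) pairs.
    Returns the fraction of sampled cases that were fully stable."""
    sample = random.sample(reference_set, min(sample_size, len(reference_set)))
    stable = 0
    for inputs, expected in sample:
        answers = {ask_model(inputs) for _ in range(repeats)}
        if answers == {expected}:
            stable += 1
    return stable / len(sample)

# Usage idea: if consistency drops below an agreed threshold, pause the
# automation and route outputs back to manual review.
# score = consistency_check(ask_model, reference_set)
# assert score >= 0.95, "model drifted: bring the human verifier back in"
```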

>> I think a lot of what we've said is about proactivity, right?

If we come back to the audit trail review: the little bit of additional guidance I've heard regulators give around how to go about doing this has been about being proactive, not reactive. We talked about having AI help sift through the millions of records, but there are also tools being thrown out there that put a large language model on top of large data sets (we'll use the audit trail just to keep with the theme). Anything sitting on top of a large data set that then gives a user the ability to go and interrogate it. And so I'm curious what you guys think; I feel like that's more of a Pandora's box, right?

That's more reactive, right?

The data is there. Now, you're just going to go and kind of sift through and ask questions about what may or may not be in there, but then it just kind of opens up the world of like anything is possible.

How many questions could you realistically ask it? And when you're talking about audit trail review or something like that, how much of the weight has to be put on the upfront proactivity? Because I don't know if that's even a practical review that could be done by someone and called thorough. You know what I mean?

>> I think you need to use the tools to answer the questions that you know you have first. There have been many suggestions in the industry about what questions we should answer through audit trail reviews, and I think those are paramount, because based on experience we know what behaviors we're looking for.

The only way to get in front of the problem is to detect faster the things that we are trying to identify. And then on top of that, sure, allow systems to explore, but we don't want too many false positives, right? We actually have to be smart about that and not be wasteful with time by creating new problems that really aren't critical.

>> On the other hand, nothing that we find should scare us or keep us from fighting it, right?

>> No, for sure.

>> We're looking for the Friday afternoon syndrome in our audit trail, right?

Because we know it's happening.

>> I love that you have a name for that.

That's actually really fun.
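The "Friday afternoon syndrome" (a week's worth of data entered in one late-week burst) is concrete enough to encode directly. A sketch over the same hypothetical audit fields as above; the weekday, hour, and batch-size cutoffs are invented.

```python
# Flag Friday-afternoon data-entry bursts: one user entering an unusually
# large batch late in the week. Thresholds are illustrative.
from datetime import datetime

def friday_afternoon_bursts(audit_records, min_entries=50):
    """Group entry events by (user, date); flag big Friday-afternoon batches."""
    buckets = {}
    for r in audit_records:
        ts = datetime.fromisoformat(r["timestamp"])
        if ts.weekday() == 4 and ts.hour >= 13:  # Friday, after 1 pm
            key = (r["user"], ts.date())
            buckets[key] = buckets.get(key, 0) + 1
    return [{"user": u, "date": str(d), "entries": n}
            for (u, d), n in buckets.items() if n >= min_entries]
```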

>> But I'd let the AI explore what's in there, and let us judge whether that's the case or not. And if we extend this from the audit trail to real clinical data, we might learn things we didn't see in the first place, but it's for the better of the patient and the world.

So why not accept the risk of learning something your product manager wouldn't want to know? I think ethically we should exploit that where we can. Which of course leads us to the question of where we start.

Yeah. We can't do it all.

We shouldn't do all cases with AI.

Have you come across that enthusiasm that wants to do everything with AI? You've got to regulate it; you might be investing in something somebody does once, in an afternoon, in a year.

>> Yeah, I think there's a change management challenge.

You might sort of force it by saying, look, you're going to go AI first, right?

But that's really just to encourage the whole organization to go: right, let's try it.

Let's look and see if that works.

Otherwise, people are just going to go, "No, no, hands up. I'm not going to try it." So to me, AI first isn't about getting rid of people. It's about trying things and learning and evolving an organization; it's just a strategy to do that.

You know, some people are doing it, some people aren't. And I don't think it's a great strategy, because I think it terrifies certain people for the wrong reasons.

>> Yeah. I think there's an interesting juxtaposition here. In a lot of the conversations we have within Veeva with customers or potential customers about what change management could look like, for many of the things people should be doing, at least in our opinion, there is always a hesitancy for change because of the effort that comes with change management. But then you look at something like AI, this new thing where we would say: be very pragmatic in your approach, pump your brakes, and be very thoughtful about how you apply it.

This is something a lot of people seem to be really gung-ho about, just going straight into it, and they're happy to go down the change management path, whatever comes along with it.

So it's interesting that there's this juxtaposition between things you should do that you're afraid to do, and things you probably shouldn't do yet where you just can't wait to go down that rabbit hole.

>> Back to being a teenager.

>> Yeah, very much. We get very excited, and we're going to sprint and run that skateboard right into that brick wall.

You know, it was a great jump, but you know, it didn't quite land well.

I think the key to me, though, is: look, we need to explore and experiment.

It's about what percentage of your time you spend on that, and to what benefit and value. As long as you're targeting the areas that would benefit the most, but are also realistically deliverable in a reasonable time period, that's where the investment should go, right? I don't think it's just pie in the sky, explore everything, because that actually wastes resource.

And as an industry right now, we really can't afford to waste resource.

I'd rather have those minds working on the things that matter most to Bayer or other organizations.

>> So what does this look like in a practical, realistic implementation, when it comes to bringing it in and having some structure and governance around it? I understand we have yet to see the full regulatory landscape, what this needs to look like, and what kind of barriers we need to stay within. But just from what we know right now, what would be, maybe within Bayer, a way that you would practically try to bring some of this in? Some things people could think about to say: maybe I could start to dip my toe in this water.

>> We're doing something about that right now.

Um, I'm sorry, we're probably curbing some people's enthusiasm, but that's not the intention. It's really to keep the enthusiasm and channel it into something useful.

So, we've created a funnel, a way of telling us the idea and what it would be good for. And we assign some staff who go back to people and say, "What do you mean? What is this good for? And can you give a number to it, of whatever kind?"

And we won't regulate every attempt at doing something with AI.

That would be the stupidest thing of all, but it's got to stay under a certain threshold of spend, let's say.

Maybe time, or other resources, what have you.

But when it goes above a certain line, then it's got to get a business case that is worth us seeing and that fits our strategy, too.

Yeah. And it's not risky in terms of compliance and all of that.

So, we have to have some good people looking at the cases and saying, "Yep, good one. Go."
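Robert's funnel reduces to a simple triage rule. The threshold, fields, and outcomes below are invented for illustration; the point is only that the gate is explicit rather than a quick chat.

```python
# A sketch of the idea funnel: small experiments flow freely, larger ones
# need a business case, strategy fit, and a compliance look.
SPEND_THRESHOLD_EUR = 10_000  # illustrative threshold

def triage_ai_idea(idea):
    """idea: dict with 'description', 'estimated_spend_eur',
    'expected_value', and 'touches_regulated_data'."""
    if (idea["estimated_spend_eur"] <= SPEND_THRESHOLD_EUR
            and not idea["touches_regulated_data"]):
        return "go: experiment freely below the threshold"
    if not idea.get("expected_value"):
        return "back to submitter: what is this good for? give it a number"
    return "review: needs a business case, strategy fit, and compliance check"
```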

" >> Doug, I imagine you have this conversation quite a bit with with potential current clients.

>> Yeah. It is fascinating, I think, in the life cycle of the application of AI.

You've got the companies like Veeva who are saying: right, we're looking at our products, we're looking at AI, and we're going to see what we can do to apply it and embed it. But you also have the sponsors, the users of software (or non-users of software), who are going: oh, we're going to bring our own AI in and sort of bolt it on.

And I think you've got a similar challenge to the one you'd face even forgetting AI.

You need to be careful that you don't end up with pieces all over the place.

You know, technology, AI, and the business all of a sudden relying on this super-duper widget that pulls information from a questionable source and does something that's going to change your business, without the control.

So I think you should look at, and I don't like to use the word validated, but I will: validated implementations of systems with AI, and make sure you do something similar within your internal organization.

It's not just throw AI at it and it'll be okay.

You really need to see that as a technology.

It should go through similar life cycle processes.

You know, obviously confirming value, but also confirming validity in what you're doing.

>> Aside from your overview, is there a specific use case that really excites you guys?

>> Protocol simulation is what comes to my mind.

Yeah. So, that's something we're looking into: verifying the ideas that we have against not only the data we have, but the data we can acquire or use, which is often brokered or serviced. Seeing that we understand what we're doing to sites, patients, and our data when we set that parameter to the right and that one to the left, and what it does when we change that.

And is that any better in terms of probability of success, or timing, or executability at the sites?

That's something I'd be interested in.

How far is that AI, really? It's based on large data models, and it has certain models behind it to interpret. It's beyond my capabilities, I have to say, but I know people who do this as their profession, and that's something we're looking into. It's certainly a case you couldn't do at home in a million years, but AI could look at the 350,000 studies worldwide and tell you that this is an average idea at most.

>> How about you, Doug?

>> Yeah, I'm going to be a bit more contentious here.

So, I actually wrote a blog post on the 1st of April, and it was about how AI was going to take data from EHR and produce a clinical study report. Of course it was an April Fool, but the principle is that you need to be very ambitious in what you could potentially do, and who knows, maybe that will happen. But one thing I have passion for, and apologies, Drew, I've raised it before: to me it's the principle of garbage in, garbage out in the life cycle of a clinical trial. And the garbage at the beginning, I hate to say it, is the protocol. The protocol is a document, right?

It's got lots of words and we've just got used to this principle of giving a protocol to each of the departments.

You know, biostats will take it, the data manager will take it.

Yeah, they can contribute to it, but basically, you know, go and knock yourself out with it. And we implement studies and we run studies based on that.

And with that variability, AI can maybe do something, but it's just going to be a confidence factor.

You know, it's 80% there. Now, if you can standardize that, that's going to make a difference.

And there's the CDISC USDM standard, the M11 standard, the DDF standard. They will combine so that, in principle, you could sit down and define a study, with AI supporting it with information from different sources, to say: right, that's a study. And what I think we can eventually achieve, and this is a bit radical, is a zero-week study build. Basically, you'll gather information and it'll go: hold on a minute, I can create the whole eCRF and I can put in place all the data cleaning rules that are necessary, and there you are, live.

So once you sign off on that protocol, you've got your study build.
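A toy version of that zero-week build: once the protocol is structured (USDM/M11-style) rather than prose, the eCRF and the cleaning rules can be derived mechanically at sign-off. The mini-schema below is invented for illustration and is far simpler than the real CDISC USDM model.

```python
# Derive an eCRF and range checks directly from a structured protocol,
# so sign-off on the protocol is sign-off on the study build. The schema
# here is a toy stand-in for USDM/M11-style definitions.
protocol = {
    "study_id": "XYZ-301",
    "assessments": [
        {"name": "Systolic BP", "type": "number", "unit": "mmHg",
         "min": 60, "max": 250},
        {"name": "Weight", "type": "number", "unit": "kg",
         "min": 20, "max": 300},
    ],
}

def build_ecrf(protocol):
    """eCRF field definitions fall straight out of the structured protocol."""
    return [{"field": a["name"], "type": a["type"], "unit": a.get("unit")}
            for a in protocol["assessments"]]

def build_edit_checks(protocol):
    """Range checks come from the same source, so no separate spec step."""
    return [{"field": a["name"],
             "rule": f'{a["min"]} <= value <= {a["max"]}',
             "query_text": f'{a["name"]} outside expected range'}
            for a in protocol["assessments"] if "min" in a and "max" in a]

ecrf = build_ecrf(protocol)
checks = build_edit_checks(protocol)
```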

>> I know we're up on time, so I'm going to throw a quick wrap-up question at you guys. What would be your current call to action to the industry, something they need to do or should be thinking about doing to progress us forward? It could be as broad or as specific as you like. It could be as friendly or as provocative as you'd like.

If there's a specific functional area you need to call out, just say, "Hey, we need you to do XYZ in order to move things forward." If you could call an action out, what would that be?

>> There are a couple of obvious ones on digital data flow and automation, and on trial simulation and document generation, and these are clear to all of us, but they all optimize processes inside a sponsor or inside a CRO.

I would call for looking at what we can do for our patients, using AI on our data, on our processes, on our way of running studies to a successful submission.

So think about what's in it for them, and focus on that.

No idea where this is going.

But I'm shooting for that moon.

>> Great one. Drew?

>> Wow. I wish we had a session on this.

For me it is actually foundational, and it's focusing on what matters most.

You mentioned patients, you mentioned sites right through this conversation.

I think there's a realism here: adding complexity to complexity does not usually lead to simplicity and streamlining, right?

And even if it does, the question becomes how we could ever fix or understand it when the next layer is needed. So I do think we have to simplify, we have to standardize, and that's at the trial level, the design of that trial, making sure we're collecting meaningful data that's actively analyzed.

So again, focus on basics.

It's a time of explosive ideas, techniques, and tools, but at the core we have very clear business challenges that we need to focus on solving, and we can use the new tools in conjunction with the old methods.

I think we can do a lot together immediately.

>> Excellent. Doug, final word to you.

>> I love the idea of simplification.

You know, I think we are unfortunately guilty in this industry of being additive.

We keep adding more and more layers.

You know, go back to the beginning of writing edit checks: we thought, right, we'll have edit checks for everything, and nobody will need to clean the data.

That didn't work. So we added another layer, and another layer. So I think it's time, hopefully with AI, to say: actually, let's strip away some of these old layers.

We don't need them.

And simplify it. That will, by the way, take some regulatory changes.

And I'd love to see some of those changes, but ultimately I'm a big fan of simplifying.

>> Excellent. Well, Robert, Drew, Doug, thank you all very much for your time.

Appreciate you being here. I hope you enjoyed that episode. If you did, please subscribe to the podcast and leave us a review.

It really helps us out.

You can find previous episodes on Apple Podcast or Spotify or anywhere that you get your podcast from.

If you'd like to reach out with any feedback or thoughts, or if you'd like to participate as a guest on the podcast, you can reach out to unblindedpod@veeva.com.

For now, I'm Manny Vasquez.

We'll see you next time.
