
AI Is Changing Healthcare — But At What Cost?

By AI and Healthcare

Summary

Topics Covered

  • AI Is the New Electricity—With Hidden Environmental Liabilities
  • The Wild Wild West: AI Regulation Hasn't Caught Up With the Technology
  • Tech Companies Get Reimbursed if They Improve Health Outcomes
  • Patient Data Sovereignty: The Hidden Crisis Nobody Is Talking About
  • Agentic AI Workflows Create Shared Liability for Clinicians and CEOs

Full Transcript

Hundreds of thousands of people were dying in Pennsylvania and getting cancers. The truth is, politicians were being paid off, because that's how they got funded with their campaigns. Big oil is totally okay paying off the fines. AI is truly the new electricity. People think that it's literally just a cloud, and it's not an actual place where you need to store everything. You do need a lot of water and a lot of energy to maintain those things. A big health concern is actually for the actual infrastructure.

Sanjay has the most beautiful halo around him right now from the golden hour. You are literally lighting up.

Welcome to AI and Healthcare. My name is Dr. Sanjay Juneja. I'm a hematologist and medical oncologist, also known as TheOncDoc on social and news media. And I'm joined by my co-host, Dr. Douglas Flora, who is a man of many things in addition to being executive director at St. Elizabeth. He is the incoming ACCC president and the editor-in-chief of AI in Precision Oncology. And today, Doug, I am very excited, because we have somebody who arguably wears more hats than you do.

Akifa Kate is just exceptional. She is a healthcare attorney, and that's really where her foundation lies. She actually taught and wrote the chapters about AI and all of the legal issues related to AI in healthcare. In addition, she's a biotech expert. She got her education at Cornell and Johns Hopkins, with a Master of Health Administration and a biotech MS. And what makes her exceptional isn't just her credentials. It's also that she's turned one of the most painful and dismissed patient experiences into a mission, a company that hopefully we have time to get to today, because first we really need to get into her brain when it comes to all of the governance, regulatory, and compliance issues as they relate to these tools that are just exploding.

Yeah, Sanjay, this is going to be a fun one. We've been looking forward to this for some time. Obviously, she is a brilliant woman, but she's also brilliant in ways that you and I are not. And I like having new voices on this pod. I love having people with a completely different perspective. We'll get into a little bit of her background, how she found herself in this place where she's a leading voice, and a lot of these topics will be very interesting to our audience. So, welcome, and we're very glad to have you tonight.

Thank you guys. I am humbled by that introduction, and I really appreciate your time. I'm excited to let you know what's going on in the landscape.

Yeah. So, we met at HLTH, and I almost wonder if it deserves its own episode. You were telling me about this really kind of terrible experience you had with endometriosis and repeat surgeries and potential cancer lesions, and you have an entire radiomics company that helps identify these lesions to optimize operations. So that's my teaser, because we're not going to get into that at all right now. What we're going to get into is how you were actually just at HIMSS, and in our pre-call you came talking about all kinds of interesting things as they relate to the legal aspects of healthcare and AI. So, if you could do us the favor, tell me some of the things that excited you the most about what you learned, and what is under debate when it comes to AI and healthcare.

Yeah, absolutely. Well, there's so much. It is a very fast-moving landscape, as we were just discussing, and I want to start with the bare bones of AI in general, because people are not realizing that AI is truly the new electricity. Not even Google; it is literally as if we've invented electricity all over again. It's going to change every facet of every part of our life, and there's going to be a before and after. So the first thing we can start off with, from the top down, is just AI in general. You have different buckets. I'm going to go from bucket to bucket, and then we'll hit healthcare as one of those buckets, and then you can see how drastic a change it is and what's coming for everybody, essentially, whether we like it or not. And so for AI you have

power, infrastructure, geopolitics, and economics. Within power, you'll have corporations, government, military, intelligence, and capital. Then, switching on over, I'm just going to hit all the topics. Second, we have infrastructure. That's where you have your chips, your GPUs, your data centers, your energy, making sure that there's enough water for all of these different things, the rare materials that are required, and also the networks that are required to keep everything afloat, because for some reason people think that it's literally just a cloud, and it's not an actual place where you need to store everything and everything that comes along with it. One big thing with data centers, for example, before I hit the other two topics, is that they're very expensive to maintain, and you do need a lot of water and a lot of energy to keep those things running.

As for the data centers, I've actually had a case in the past where I was exposed to a grand jury investigation, and this has nothing to do with data centers. It was actually something that happened in Pennsylvania with fracking. Fracking is where you go into the ground to get the gas. What happened is all this natural gas was essentially going into people's water supplies, and it actually was causing a lot of cancers and a lot of big issues for the surrounding neighbors. So, what happened? A lot of people were paid off, essentially. I'm just trying to conceptualize this to let you guys know what we're dealing with here, the reality of it.

No, that's so interesting, because when we talk about all of the things that are available, at least for AI and healthcare, we seldom talk about the liabilities from this legal aspect. I don't know, Doug, that people are asking the questions: where are you getting your power sources? What are your means of being able to power and compute all of these things? And to hear that you could actually have a liability just from the way that a platform or a point solution is able to supply the insights that it has is a whole other kind of liability that I didn't even think about, especially when you talk about the dependence, or the dependability, that occurs with AI tools.

Yeah, absolutely. So this is just one area. We have the power, we have infrastructure, we have geopolitics, which is everything that's happening in the world, and we won't discuss that; that's a whole country-versus-country thing. And then we have the economics. The economics covers labor, shifts in labor productivity, the cost of AI, and also making sure that there aren't antitrust issues in the market.

So, when we hit that one aspect of energy and data centers, a big health concern is actually for the actual infrastructure. What was happening in this case, right, is that neighbors were getting paid off by those large oil companies saying, "Hey, we'll buy your property. We know we are a nuisance to you, but we'll either give you free water for life or, you know, we'll just buy you guys out." And it got so bad, it became a grand jury investigation, where hundreds of thousands of people were dying in Pennsylvania and getting cancers. You had lists and lists and lists of people, and nobody really cared, because the truth is, when I was sitting in that room, politicians were being paid off, because that's how they got funded with their campaigns. The big

oil companies were totally okay paying off the fines, because they had that much money in it. And the EPA was kind of turning a blind eye and making different regulatory moves to make it a little easier, because of the lobbying that was happening. And so a lot of these Pennsylvanians were suffering. To make it even worse, they even did dumping, essentially. Instead of getting rid of their waste properly, they were throwing it back into the ground, football fields of it, and it was literally affecting these people. Luckily, the office of attorney general, and this was a long time ago, saw it, combed through it, and helped the people in Pennsylvania.

Now shift that to data centers, because data centers have tons of environmental aspects that are all health concerns for the constituents who live there, not to mention the actual financial burdens that will come with increased energy prices and things like that. So that's just the one little section for infrastructure in this grand scheme of things, and I did relate it to healthcare. But in terms of the healthcare infrastructure, if we're going to go through each of the buckets before I hit what happened recently at the conferences I attended: you have governance, you have tech convergence, and then you have society. Within law, you have regulation, antitrust, liability, and privacy; you have copyright and patents. You also have deepfakes, elections, and healthcare. So this is just governance and healthcare law, specifically the regulatory side.

You then have tech convergence, and where does that play into everything? Robotics, biotechnology, cyber warfare, quantum, BCIs, and different security risks, and this also relates to all the different tech that's happening in healthcare. BCI means brain-computer interface technology, so it's pretty much a chip; Neuralink and Synchron are doing it to help paraplegic people. So you have that coming out. And then robotics, to help with all the different things that you see in hospitals or at home for elderly care, all of that.

And then the third and last little bucket is society, and how it's going to affect different areas. So you have education, health, family, culture, cognition, and all kinds of things in between. We're not going to even hit all of the philosophical and ethical things. We're just talking about what's really happening and where it's all going to go.

So now you have these large corporations that are in this AI race, essentially, to advance their models, but the regulation has not caught up with it. And quite frankly, they don't want a bunch of regulation to slow them down. So what we have happening is the wild wild west, essentially, and you'll see it in all areas. We're not going to even talk about those, because that's probably ten different podcasts on its own. But we are going to talk about healthcare, and I don't want to do a whole monologue. So that's kind of the big basis, and we can get into the nitty-gritty of everything healthcare.

I think that's an awesome way to set the table. So we're excited about this, and I do think we need to get into the nitty-gritty as we lean into healthcare, but this does matter for us in healthcare regulation, too. As we hear about changes at the federal level for software as a medical device and things like that, some of the reluctance to institute regulations on healthcare is based upon the federal government's current stance on the arms race. You know, we've got to beat China, and all the stuff that you hear out of David Sacks and the Office of AI and Crypto. So I think there's an ideology, and those of us in the healthcare world are fascinated by it, because it almost seems like it's happening to us instead of with us, and you're in the middle. You're watching these things evolve. Maybe take us back to HIMSS and talk a little bit about some of these new updates that you were sharing with us in the pre-show, some things that our audience of inventors, founders, CEOs, industry types, and doctors might find interesting.

Yeah, absolutely. I mean, they rolled out announcement after announcement, and there were a few conferences I went to, including the HHS one and a few different ones. The one biggest thing I think clinicians would care about is the reimbursement one. Anthropic popped out their version of a HIPAA-compliant healthcare platform, and so did Google Health, as well as ChatGPT. So you can imagine, now you have a whole different playing field, where patients will be able to put all of their data into these tech platforms. And the exact phrase that was coined is: if the tech platform can significantly positively affect the health outcome, the tech company gets reimbursed.

So no way.

Yes.

Interesting.

That is interesting. So now, how is that going to play out where hospitals and doctors are also a part of that? In one aspect, are patients going to go straight to the tech and say, you know, these are my symptoms, here are all my labs, now give me the outcome? Where does that leave the physician and other people in the long term? But in the interim, it's great for early adopters. I think it's going to be something where, if you don't use it, you probably are not going to be able to keep up, and all of the healthcare systems are going to slowly adopt these different versions of being able to utilize them. So that was a huge, huge announcement, that they are trying to make policy related to it.

Yeah. I mean, just trying to process that out loud: we use technology all the time, right? We use a CBC counter or a CMP, we use MRI and CT, we use the computer, and we use Epic. None of those are directly billing; Medtronic, as far as I know, is not billing directly into anything related to healthcare compensation, unless it's for an algorithm or something. How is that generated? With a CPT code? Do you know the logistics of exactly what that billing system is? Is it the same one we exist in today, or is it just kind of a per-use, month-to-month kind of billing?

That is a good question that I would like to know the answer to myself. I would like to know everything about this, but they just gave a brief announcement

of this. So, how far along are they, what are all the complexities, and how exactly are they going to deploy it? I'll be honest, I'm probably first in line to try to learn more about it. They didn't give too much. But what I can think of is that if they're going to try to tokenize the use of these platforms, then I would think it would fit in line with some version of that. Maybe hospitals use it first, and that way they're kind of the segue, and they have a bigger budget. This is all also business planning and commercialization, trying to figure out how it would fit in. I don't know if they're going to leave it to consumers; that also puts a huge, huge risk on many other things. National security is a big one, with all of your data in one area, and making sure that there aren't antitrust issues there. And then, who owns that data at different points? Every time you change that data, primary, secondary, etc., it all matters, and so do the people who are doing it. So right now it's fragmented, and there are a lot of interoperability issues and adoption issues, but we're going in this direction for sure.

Sanjay, I was going to say there's some context here from when the EMRs came out, and you were probably both a little too young to be there, but we're hearing a lot of rumors, Akifa,

about HHS and others building plans to build these tools in as an incentive for providers to accept the help. We had a five-year rollout of the electronic medical records, where practices were encouraged incrementally. We would be paid to play, paid to implement an EMR, building up a higher and higher portion of reimbursement for your cooperation, and by year five, if you weren't in, there were fines. I'm hearing a lot of rumors, and we don't get too much into rumors, but I've heard from three or four different places that a similar approach is being considered right now, and that might be hospitals that are willing to work with Anthropic or with OpenAI or with Google to use these tools to augment the care. If you're using these tools, you may have access to an additional J-code, or some surcharges that would, say, add complexity to your RVUs, because you're using decision-making tools, and there are codes for those.

Yes, absolutely. I can't speak on behalf of what they're doing, but if that hasn't crossed their path and hasn't been a part of the conversation, I would be surprised.

Yeah.

And again, I wouldn't be surprised if that's kind of how it would roll out as well. So yeah, it's pretty much a 2.0. I would

say, now going back to what we just discussed before about data centers and all these different AI systems, something that's also important to note, and I'm just thinking as an attorney here: you want to have a separation between your infrastructure providers and your clinical AI deployers, because otherwise you're going to have a huge, huge conflict there and an antitrust problem, similar to what we see in the whole insurance sector, right? So you have the PBMs, and pretty much just the whole vertical.

Yep.

Exactly. And that can be quite dangerous, right? So you want the diversity. If we're going to have to pay to play and be a part of this, we definitely want to have different large players, so we have options, for sure. Absolutely. And also offline versions of things as well. The one thing I would say is, God forbid something does happen, right, and we're solely reliant on everything being online. You can see what's happening in the world today, and there are huge, huge risks in where these data centers are. So what's going to happen if something goes out and all of the hospitals' data is gone with it? There has to be some version of an offline or a backup of some sort; I'm sure they have thought of different ways to address that as well.

Akifa, I find this so interesting, because we have this concept of dependence, where basically a physician may become dependent on something to do something like calculate chemo, for example. Fortunately, we don't get into situations where all of a sudden we have to manually hand-calculate the chemo, because when you think about software, software is like: you purchase it, you get it at your institution, whether it's for radiology or calculations, and it just runs, right? It's not AI. It's already pre-built. It's pre-coded. You just need electricity. So one liability would be, if the electricity went down, then sure, you're not able to compute this. But as I'm listening to you, I think about all of those point solutions or platforms, like ambient listening and learning, or potentially clinical decision support tools that will surface absences in the chart that you need to order for a proper workup, or some of these other things that one could foresee replacing some of the more manual and tedious processes when it comes to delivering, specifically, cancer care. Can you walk us through just a couple of the liabilities that we're not thinking about if we are going to a vendor? Because you had mentioned networks. I don't know what question to ask a vendor: what network do you use, and how do I think about what the liability of, quote-unquote, that network would be? Do I ask them where their data is held and what the liabilities are for the data? These things that you mentioned, and the power that goes behind them: you don't want to become dependent on something if there are twelve different ways for it to, quote-unquote, kill you. And I just never thought about that element before, of really having to consider these things, again, because a lot of this stuff is being offered in kind currently, because they just want proof of concept. They want data. They want research, and they say, "Hey, we're not even going to charge you for the tokens that you're using." Imagine if somebody becomes dependent, and then all of a sudden there are seven different ways that it could crumble.

Those are the exact issues that people have to think about, so they can make informed decisions. Ultimately, and I always go back to this, knowledge is power, so we have to have knowledge and education on how to go about these things and what the options are. I think a lot of it should be transparency, right? Similar to how we need transparency with services or different drugs and things like that in the pharma world. Likewise, if it's a service that's going to be provided, there should be some transparency around all of that as well. And I would say, currently there's not really that much infrastructure surrounding it; they're building it, right? And they're building it while it's going into the private markets already. So this is ample opportunity for people to have their voices heard and to make comments. I would suggest going onto the HHS website and listing out all the comments and concerns that you have, so that as they're making all of these regulations, they would consider them. So I think more on that basis; I'm not really sure how it's going to work out after the fact, when these things are all in the market.

I will say, though, there is something I was thinking about heavily, which is to have almost like a blockchain ledger. I think that could be very important to help with preventing fraud and securing integrity. And what you said was really interesting, because you said we have a lot of subscriptions, right? We don't really own anything, if you really think about it. Before, you used to have an old CD, and it was your CD, or your book, you know, your medical record. But now everything is stored somewhere, or is property that is yours, but is it truly yours? Do you really have full control over it? Or do you have to be a part of a system in order to access all these things? With that being said, I do think two big things, and this is more a proactive approach, because there's not really that much regulation yet. When we're making these regulations, laws, and policies, within companies, within states, and federally, we should consider how that is going to play out on the back end. And I do think, to prevent issues, we always have to make sure the patient has control and has choices about what they are giving the inputs to, because once you put them in, the inputs matter, and they're very, very valuable. And then again, having a ledger to prevent fraud and all kinds of issues that come about there. I don't know if that was a little helpful, but it's more about how to structure things.

So the next question I have, and it seems to be all the buzz right now, is this whole concept of automation, especially agentic workflows. I think a lot of the companies leading this AI race in healthcare systems are able to take data, harmonize and normalize it, and set up agentic workflows to execute really all kinds of tasks and insights, whether it's different kinds of care gaps, things that have to do with billing, or even scheduling beds and really optimizing treatment times for chemotherapy, etc. Could you tell us what some of the legal liabilities, concerns, or things to at least be aware of are as they relate to agents and the data that they're using? And are those two separate bodies of legal concern, or are they lumped together? How does that work?

They are two separate things. So right now there should be distinguished regulations between static AI tools and agentic and advanced models. And I think there should be a heightened requirement when you're dealing with agents, especially if they are going to do things in a critical care situation, and also where it's related to finance and reimbursement. I mean, you have these agents that are going to do things autonomously and, you know, without worry, which is fantastic. I'm actually an AI nerd; I think it's the most beautiful technology, but only with the correct way that it's deployed and the right control over how it's done, right? With great power comes great responsibility, and that's exactly what I think this is. So for agents, I think there should be heightened oversight when there's self-modification, autonomous goal-setting, independent execution, and resource acquisition. That needs to have some kind of human control, especially in a healthcare setting, and especially when you're dealing with high-risk situations, because you don't want a situation where you have an intelligence that made deals or decisions that cannot be taken back.

And I think about this a little bit like BCI technology, because it's a fantastic thing, but it's also quite dangerous from a training perspective. I actually wrote an article about this a while ago. With BCI technology, you can have external and you can have internal devices. They're pretty much like chips, and they help people who are paraplegic. They can actually, just with their eyes, use a whole computer and express their thoughts and everything. So you have someone who has minimal cognitive ability, right, or is in a vegetative state, and they'll be able to participate in their care or say what they want for end-of-life decisions and things like that, right? Which is fantastic, that you're giving somebody this kind of option, or the ability to move their arm or something like that. But from a regulatory standpoint, on the lawyer side, how much can you trust it? Are we going to allow this in a court of law? Are we going to say that this is informed decision-making and it's approved? Are we going to say that they are competent enough to make these decisions? And similarly, you have agents that are doing all of this, and can do it better than humans. It's just a reality. It's not "going to happen"; we've gotten there. So now we have to think about how we are going to work with it, as opposed to just allowing it to be completely free to make permanent life choices and life decisions, especially in healthcare.

Doug, you

know, I've mentioned before that I get excited about just one use case of AI, where the complexity of billing is so challenging, because sometimes you have something that meets the standards for the complexity of a visit, so it's a level-of-service five, right? But you're just billing it improperly, so it's like a level three. And we always have these institutions saying, "Look, Sanjay, you said this in your HPI. How come you didn't bill it as this diagnosis code, etc.?" So I'm like, "Oh, great. Something that's going to actually mirror the complexity that's really there." But I never thought of it in this capacity: imagine if it was actually accidentally elevating the complexity of this billing, and now you've been billing for months, potentially even to Medicare or federal dollars, on something that turns out to be fraudulent but was not intentional, unlike some of the cases you hear where people really are fraudulently billing. So that's just one use case of an agentic mishap, and I just wonder whose responsibility that falls on. Is it the vendor or the technology company that you use, which is supposed to interrogate and ensure the validity of the service it's providing you? But if you're the institution that elected to adopt it, and you're the physician that chose to go along with the institutional adoption, does it fall on you? These are questions that are wildly concerning and, I'm going to be honest, are blunting my enthusiasm to get every kind of AI device into my practice.

Well, wait, Sanjay, first of all, for those of you that are listening on your car radio or something, Sanjay has the most beautiful halo around him right now from the golden hour. So his enthusiasm is actually visible right now, because the sun is hitting him in just the perfect way. You can't see yourselves, but it's pretty funny: as you're getting enthusiastic, you're literally lighting up.

I do think that this is one thing for our listeners; I want to sit with it for a minute. These are the tools that we're talking about every day in our normal lives, Sanjay and I, as we talk all over the country and as we're writing about this stuff. I don't think we're going to have to worry about things like rev-cycle management or business intelligence or prior authorization, or whether it's a level three or level four. These were automatable two years ago. I think when providers really start to get their feet wet (and we always say get in the sandbox, which I guess is different than feet wet), they're just going to feel relief. Because what I want to do is emote with my patient. I want to stare at the family. I want to be gazing at my patient and seeing how she's doing. I don't want to be thinking about the level of complexity of my discussion. And we had lectures this week at my hospital. We brought all our doctors together for just another meeting with our billing team to say: you have to hit this code and this code and this code for this level of complexity, for HCC scores. Because you're actually doing the work, but you're underbilling the complexity, because the only box you hit for this visit was diarrhea, and diarrhea is not a level five. You didn't document that you did eight other important things, including end-of-life care. So I'm so excited for when these agentic tools are there in earnest and they just take that off our plate, because that's not what doctors are trained to do, and it's not what we'd like to do.

I feel weird billing anybody. I probably bill lower than anybody in my practice, because I'm embarrassed when my patient gets a bill. They don't understand that if it says Flora charged $875, Flora is actually probably going to get like $21. They don't understand the gamesmanship that is insurance coverage these days. But I don't want my patient to open that envelope and be like, "Oh my gosh, maybe I shouldn't go see Doug, because it's $800, and he's a busy man." I don't want that. So I'm super excited about the agents you're talking about taking that off our plate altogether and saying: this is the work that was actually done. Our agent talks to the insurance company's agent, done and dusted. And you know, I do think

that, like I said, this is an amazing, amazing time, because it truly is like a revolution, right? An AI revolution in regard to how we're practicing everything. And so I think that when you're talking about agents specifically, it's a great, great tool, but we just need to make sure that we have the correct regulatory infrastructure to help support it. So, for example, having a ledger would be a really good one, like a blockchain kind of ledger, to make sure that it's doing what it's supposed to do. Likewise, the efficiency, the operational streamlining, is perfect, and you kind of nailed it. That's the whole point, right? We're not putting all of these things in place to make life more difficult, ideally, or to displace a lot of things. We're actually trying to streamline it and make sure that it's easy to use. And so I think with that has to come a little bit of both: making sure the regulatory side comes in for the deployment, and feedback, right? You need people that are willing to try it out, and you need feedback to make sure that this is the way we want to go.

And you know, it's kind of interesting, what you're talking about, because I remember this one case I had that was with the DOJ. It was Medicare/Medicaid fraud, I think it was like $52 million, and they had an automated master billing system, right? It prepopulated all of these codes beforehand, which obviously is very blatantly fraud, right? And it was with elderly people. So something like that, obviously, is a no-go, right? But then you have something with agents where it can take it off your plate, and it has the proper authentication, it's widely used, properly standardized. I mean, I do think that these things are really, really beautiful and amazing tools. So, you know, it's just a matter of use, the policy behind it, and how it's going to be implemented.

As I'm hearing you speak, you know, I've never thought of this before, but do you

think there's a world where there's almost a set amount of tolerance? For potentially, I don't want to call it drift, because drift is usually permanent, like you're going away, but I mean where you're just like: look, 5% of the time this agent is going to get the billing stuff wrong, and we're just going to be okay with that, because the 95% otherwise is actually appropriate. Is that something under consideration, or is it going to take a perfection standard, where you can hold a physician or an institution 100% responsible 100% of the time? Is that the same thing we're looking at currently with AI and agentic workflows, or do you think there's going to be a tolerance effect that maybe wasn't really afforded to humans?
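The "set amount of tolerance" idea can be made concrete. As a purely illustrative sketch (the 5% threshold, the CPT-style codes, and the review rule are all assumptions made for this example, not anything proposed on the show), a compliance team could audit a sample of an agent's billing codes and route claims back to human coders once the observed error rate exceeds the agreed budget:

```python
from dataclasses import dataclass

@dataclass
class ErrorBudgetMonitor:
    """Tracks an AI coding agent's audited error rate against a tolerance.

    `tolerance` is the agreed fraction of claims the agent may get wrong,
    e.g. 0.05 for the hypothetical "5% of the time" floated above.
    """
    tolerance: float = 0.05
    audited: int = 0
    errors: int = 0

    def record_audit(self, agent_code: str, correct_code: str) -> None:
        """Compare the agent's billing code against a human auditor's."""
        self.audited += 1
        if agent_code != correct_code:
            self.errors += 1

    @property
    def error_rate(self) -> float:
        return self.errors / self.audited if self.audited else 0.0

    def requires_human_review(self) -> bool:
        # Once the observed error rate exceeds the budget, stop trusting
        # the agent's codes and route new claims to a human coder.
        return self.error_rate > self.tolerance

monitor = ErrorBudgetMonitor(tolerance=0.05)
# One upcoded visit (level 5 billed, level 3 correct) out of three audits.
for agent_code, correct in [("99215", "99213"), ("99213", "99213"), ("99214", "99214")]:
    monitor.record_audit(agent_code, correct)

print(round(monitor.error_rate, 2))     # 0.33
print(monitor.requires_human_review())  # True: 0.33 exceeds the 5% budget
```

The open policy question is exactly the one raised in the conversation: who gets to set `tolerance`, and whether staying inside the budget shields the physician or institution from liability.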

Um, you have biases with AI in general. So you have different, you know, hallucinations, all these different things, right, that you have to account for. But I would say in some ways it can be better, if you think about all the different Medicare and Medicaid fraud cases happening now. I mean, it might just take some work away from some lawyers, you know, so it might be a good thing in some ways. So, again, I do think it's a great tool, but it's just a matter of how we're going to deploy and implement it.

Now, here's the other thing, and it's just another layer in all of this, and again, I'm looking at it from an attorney's perspective. I know that you're a clinician; you have a completely different perspective when you're using it. The patient has a different one. And even the hospital CEOs have different priorities, because they have to run their hospitals, and they operate like a business. And so this is the tricky part. I actually do think that it should not be completely the clinician's responsibility. It should be a shared responsibility, obviously, but I do think it will come down also to CEOs. I really do. I think, you know, regulators and CEOs, because they're the ones in charge of managing all of it, and they obviously have to look at state and federal laws. So that's their responsibility with the government, but also when they're implementing, you know, their value-based care, or deciding what tech to invest in, or how they're going to do their billing, they should be equipped, and these guys should be at the forefront of how these things are implemented. And the thing is, they are balancing profit and care. And I'm sure you guys are all familiar with value-based care versus fee-for-service. Now, even in that world of value-based care and having more positive outcomes, right (that's the whole point, we want to have access and positive outcomes), I've actually seen this happen: it can be manipulated through the policy of the hospitals, to where the physicians are required to follow policy that isn't in the best interest of the patient, but rather the best interest of the business side of the hospital. And so for those kinds of things, we have to make sure that CEOs are responsible for maintaining that integrity, for their clinicians, for their staff, for the patients. And I think that they should actually face a heightened, you know, penalty if they're doing something that's inappropriate. That's just my humble opinion on that.
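The "blockchain kind of ledger" floated a little earlier, a tamper-evident record of what a billing agent actually did, can be sketched minimally as a hash chain. Everything here (the field names, the use of SHA-256, an in-memory list) is an illustrative assumption, not a description of any real product:

```python
import hashlib
import json

class AgentActionLedger:
    """Append-only ledger where each entry hashes the one before it,
    so any after-the-fact edit breaks every later hash."""

    def __init__(self):
        self.entries = []

    def append(self, action: dict) -> str:
        """Record an agent action, chained to the previous entry's hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = json.dumps({"action": action, "prev": prev_hash}, sort_keys=True)
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"action": action, "prev": prev_hash, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        """Recompute every hash; any edit anywhere makes this False."""
        prev_hash = "genesis"
        for entry in self.entries:
            payload = json.dumps({"action": entry["action"], "prev": prev_hash},
                                 sort_keys=True)
            if entry["prev"] != prev_hash or \
               entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
                return False
            prev_hash = entry["hash"]
        return True

ledger = AgentActionLedger()
ledger.append({"agent": "billing-bot", "claim": "A123", "code": "99214"})
ledger.append({"agent": "billing-bot", "claim": "A124", "code": "99213"})
print(ledger.verify())  # True: chain is intact

ledger.entries[0]["action"]["code"] = "99215"  # quietly upcode after the fact
print(ledger.verify())  # False: tampering breaks the chain
```

A real deployment would need durable storage and access controls on top, but the core accountability property, that nobody can quietly rewrite what the agent did, is just this hash chain.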

I have to ask, you know, I don't want to say that people are necessarily waiting for a big legal thing when it comes to AI and healthcare, but I do get this feeling that many are just kind of waiting on this big, quote-unquote, mess-up. Number one, is there one that you consider big that has happened, that you can share with us? Or two, if it hasn't happened yet, why hasn't it? What are some of those measures? Is it, you know, being very conservative and safe that has kept it from happening? And then my adjunct to those questions: what do you think is potentially at high risk of being the first example of that? How far away are we from a big blunder? Like, in what capacity

would that exist?

Honestly, I think there are a lot of people that really care about all of it, you know. Everyone is doing the best that they can to roll out really great innovations and great tech and help save lives. So there are always going to be issues, and I think that's where you have the remedies of enforcement, right? And I do think there are areas that are fragmented. Data ownership is fragmented. I think that interoperability is a big issue as well. So if we can figure out, you know, those areas and kind of clean them up, I think things would be more seamlessly integrated. And also, I think you're having issues now, I guess, in Silicon Valley, and I can vouch for this, being a startup founder myself: how is that going to play out with all these large players, right? Are you going to fall on the infrastructure side or the feature side? How exactly is this going to work out? And then who owns what, and how are we going to integrate this into care when you have giant players like Anthropic and ChatGPT? They do have incentives to work with them, and so those are things that, I would say, are just going to change the landscape.

I do actually think that Singapore has a really good healthcare and national AI framework that people could probably fall back on, if they want to see something that has, you know, been enacted. And the US is actually doing a good job. I think that, you know, we're one of the foremost nations pushing all of this, and among the first to innovate. So with that come some testing issues. So let's just see where it falls. And I think that I have high hopes for the American system, that we're able to navigate through it and get on the better side of everything.

I have to give our buddy Harvey Castro a shout-out, Doug. He's a friend of ours, and he's been the advisor to Singapore for quite some time when it comes to health AI.

I want to get to the place where everybody understands that the majority of people working in these fields are trying their guts out. The clinicians desperately want help. The nurses need hours back. The lawyers are trying to defend us from ourselves and protect us from stepping in it when we don't know we're about to step in it. And I tell you, Sanjay and I work with dozens and dozens of these companies. The builders believe in their tools. They really, really do. They're not there just for a big exit. They're not there just to say they have a company. Most of these people have lost somebody to cancer. Almost all of them have a mission. And as you work with them, you start to get to know these guys. You recognize that they get thrown under this sort of umbrella of "vendors," which is, you know, kind of a bad word in healthcare, because everybody thinks you're just trying to take a piece of the pie. But this pie is efficiency. This pie is better-quality care at lower cost. This pie is giving the burnt-out providers and nurses extra hours back. And we need to do that with smart lawyers, smart designers, smart engineers, smart CEOs, and doctors who are willing to understand that just because you're approached about a product doesn't mean there's some evil overlord on the other end just counting dollars, you know, like Montgomery Burns. It's a genuinely decent person trying to solve the same problem we're complaining about all day long.

I think one thing I thought was really cool that I learned, that was a really

cool tech that just, like, blew my mind, that it was actually a real thing they were rolling out: there's tissue that you can put on a chip, essentially, and it will tell you what the gene mutations are. So for cancer, it can, you know, I don't know all the details of exactly what they were explaining, but essentially you can put tissue on a chip, and then you would figure out which mutations were there, and even what personalized care you would need because of that, which again affects cancer. It can affect endometriosis. It can do all types of stuff, and they even do some cool stuff with it. They said they actually put brain tissue on there and taught it how to play video games, which is like the most insane thing possible. But here, instead of having a ten-year diagnostic timeline, especially with endometriosis or, you know, with women's care, which is long overlooked, where you have to go through multiple biopsies and labs come back normal, now we have innovation that can help you out right away and, you know, prevent those things, which was never possible 10 years ago. So that is amazing, to save lives, and that's ultimately why we're here in healthcare: to save lives. So that should always be at the forefront of everything.

So, Akifa, you just teased us with this entire different world that we didn't get to talk about when it comes to your pursuits in medical solutioning, so we'll have to have you back for that. Doug, that was beautifully said, about where people's hearts seem to lie when they're coming with these technologies: that it's not this big, you know... at least in healthcare, honestly, you could probably be more successful somewhere outside of healthcare, given the amount of regulation and liability there is, and yet there are good people here, and I think it takes teamwork. And Akifa, we really just appreciate you being here and making us aware of where our pursuits, passions, and missions lie, and how to act on them in the safest way possible.

Thank you, guys.
