
Robotic Arm Seminar

By Yike Wang

Summary

Topics Covered

  • Cute Robots Kill Without Safety Layers
  • Triple Safety Beats Single Failsafe
  • Software Stops Customize Reactions
  • Safeguard Stop Enables Quick Recovery
  • Incremental Poses Avoid Singularities

Full Transcript

Morning. Good afternoon, everyone. Um, my name is Shi. I'm a member of the Kathy Nightingale lab, and today we're going to be talking about safety and how to control this new robot that you guys have bought. This is a 70. It's a bit smaller than our 10e up in the hospital, and so there are some considerations you have to make regarding safety, especially in clinical settings, that are really important. So, a brief agenda. We're just going to go over the basic components of a Universal Robots arm, or a UR e-Series. We're going to go over some safety considerations to make for the robot. We're going to go over how to control the robot from a laptop or remote control, and then also how to...

>> For unpackaging the robot: once you received the robot in the mail, what you had was the robot itself.

We all know that you have your control box, which is down here. It has an I/O panel for inputting and outputting signals. And finally, you have the teach pendant. So all three of these things do have separate functions, right? The robot is what you actually control and what interacts with the system that you're working with. So in the clinic, this might be a patient or a human subject, and/or your experimental setup. The control box is good for mediating safety, good for connecting to your remote system that you're going to use to control the robot, and then also good for triggering in and out. And I say that because it's important for us, at least, using the Sonix: this is what we use to send triggers to the robot to tell it to do different things. And finally, the teach pendant is there to control the robot from a closer perspective. It has a lot of buttons, and you get manual control from the teach pendant. We don't necessarily use the teach pendant much when we're using remote control mode, but whenever you're in manual control, or you want to move the robot using the buttons provided by the UR software, you would access the teach pendant for that.

Um, so first I'm going to talk about safety. We all know that this robot looks very small and very cute, but the truth is it could kill you. I'm just kidding. There are robots out there that do ninja-like activities, martial arts. Our robot does not do that, but it can still be very dangerous if safety is not accounted for and controlled. This is an example video that I found online where the robot is moving in an automated movement quite quickly. It has a high linear acceleration and it collides into this guy. He seems fine, but being hit by the robot at a high rate of acceleration will not feel good. So it's very important, especially when you have the robot moving in an automated fashion, to make sure that it is monitoring its pose, its position, and its speed at all times, and putting appropriate limits on those. So yeah, specifically for collaborative robots, and for you guys and us in a clinical setting, this is really important to consider.

So I'm going to go over three different layers of safety that we implement. And the reason that I'm presenting it as level one, level two, level three is that this is how we report it in our IRBs. Specifically, because you're introducing a collaborative robot, they will want to know that you're using a collaborative robot in your studies. So this is how we report our safety.

Level one is hardware limits. These are limits that we configure on our teach pendant; this is the base level of how fast the robot can go, how much force it can take, and positional limits. So these can be things like joint positions, tool position, and force limits on the robot, and then also safety planes, which are general areas that the robot's tool (the end of the robot) or even the elbow of the robot can't reach. All of these are coded at the base level; you can put them in on the teach pendant. So the teach pendant helps you configure the basic robot limits. The main thing with this is that the robot will engage a reduced mode, where it just moves more slowly and offers more resistance, if it breaches any of these safety limits. And these are our safety settings. I'll send you guys these slides if you want to refer back to this when configuring your limits as well.

Level two is software limits. For all intents and purposes, software limits should be for the most part redundant, because most of your safety concerns should be addressed in the hardware limits. So hardware limits are like the base level; software limits are what we're monitoring when interacting with the robot. We have additional checks like: if the robot detects a force larger than 20 newtons, or if it's moving faster than 25 centimeters a second or something, we stop communicating with the robot. And that's just on our laptop side. There are also joint position limits and things like that. This is mostly facilitated by the Real-Time Data Exchange (RTDE) module on the robot. The Real-Time Data Exchange is just continuously polling the robot for its force and speed and position, and then we have the additional safety checks implemented in our laptop code, basically.

The key advantage of software limits is this: with a hardware limit, if the robot violates it, it just starts slowing down. It has a very set behavior for a hardware limit violation. But with a software limit, you can choose what happens if a safety limit is violated. So for us, if we hit a safety limit, it prints out something like, "Hey, you're putting too much force on it," or "It's moving too fast; I'm going to start slowing down." That's something that we imposed. You can control what happens if a safety limit is breached, and how urgently it gets mitigated. And so yeah, as I mentioned, software limits should be redundant, because hardware limits should be the base level of safety. Does anyone have any questions about that?
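To make the "level two" software limit idea concrete, here is a minimal sketch of the kind of check the laptop code could run on each RTDE sample. The thresholds match the numbers mentioned above (20 N, 25 cm/s), but the function names and structure are illustrative, not the lab's actual code; in practice the force and speed values would come from the RTDE stream rather than plain arguments.

```python
# Illustrative "level two" software-limit check. Names and structure are
# made up for this sketch; the thresholds are the ones quoted in the talk.
import math

FORCE_LIMIT_N = 20.0     # stop if TCP force magnitude exceeds 20 N
SPEED_LIMIT_M_S = 0.25   # stop if TCP speed exceeds 25 cm/s

def violates_software_limits(tcp_force, tcp_speed):
    """Return the list of violated limits for one RTDE sample.

    tcp_force: (fx, fy, fz) force at the tool, in newtons
    tcp_speed: (vx, vy, vz) linear tool velocity, in m/s
    """
    violations = []
    if math.hypot(*tcp_force) > FORCE_LIMIT_N:
        violations.append("force")
    if math.hypot(*tcp_speed) > SPEED_LIMIT_M_S:
        violations.append("speed")
    return violations

# The caller chooses the reaction: print a warning, slow down, or stop
# communicating with the robot entirely.
if violates_software_limits((0.0, 0.0, 25.0), (0.1, 0.0, 0.0)):
    print("software limit violated, stopping commands")
```

The point of the sketch is the design choice the speaker describes: unlike a hardware limit, the reaction here is whatever the caller decides to do with the returned violations.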

And then the last level is manual or emergency stops. So, say something happened to your hardware limit and it didn't actually catch, and then your software limit also didn't catch it. Or just in general, if you're in the clinic and the patient is getting scared or something, that's something that your hardware limits or your software limits are not going to pick up on. That's just a personal level of safety. So, we have additional buttons. We have an emergency stop button on the teach pendant, this big red button here, that you can press, which will engage the solenoid brakes, back-drive the robot, and basically shut it down. It stops all movement and the robot stops. But the downside of this is that it also turns the robot off. So, let's say the patient is being pinned or something and they say, "Oh my god, please stop the robot." You stop the robot and it shuts down. It takes a little bit more time to start the robot back up and remove the patient from that scenario. In the case where you're trying to just stop the robot's movement but not shut it down, you can use a safeguard stop button; there are different levels of safety stops. So, we have put a safeguard stop button on the control box that... Sorry, okay, I was about to say: the emergency stop button is on the teach pendant, and we've added a safeguard stop that's connected to our control box. It looks like this. We just give the button to the patient, and when they feel uncomfortable, they press the safeguard stop button and the robot stops moving, but it doesn't turn off. And so then we can undo the safeguard stop and move the robot out of the way.

>> Okay.

>> Um, so once it's turned off, let's say the patient's being pinned down, you can't manually move the arm at all? It's all locked up?

>> It's all locked up, yeah, as long as the button is pressed. So if the e-stop button is pressed, then it locks and it turns off. But with the safeguard stop button, if the button is pressed, the robot stops moving. And what we do is basically also cut any sort of commands. So if the robot was in the middle of a move and the safeguard stop button was pressed, it stops, so it doesn't continue the move after you release it. Then when you release it, we put it into free drive mode and we put it back.

>> Okay.

>> Which is just a quicker process than the emergency stop button.

>> And free drive mode is disabled when it's shut down?

>> Yes.

>> So that's good to know. Yeah,

>> It's basically locked; we can't manually move it.

>> Right. Exactly.

>> And this free drive mode that I talk about, we use a lot, because up in the clinic we're manually positioning the robot before we start our rotations for scanning.

>> Do we have that? Obviously, we can't give our mice an emergency stop, you know.

>> But yeah, you guys, I mean...

>> It's not a [inaudible] device. And that's the thing: with automated scanning, that's hard, you know; the manual emergency stop is not really relevant.

>> We do need that, though. That's my point, because I think what we worry about is the lock-up. Say it goes down and the animal is being squashed; we want to actually stop the movement and actually pull it back.

>> Right. And so what you could do is, I think, say something like: if the robot is in safeguard stop mode, start moving back up. The safeguard stop is also something that you can technically control from the remote side.

>> So, "if the robot is in safeguard mode" or something. Or actually, from a safeguard stop button perspective, you could say: if the safeguard stop button is pressed, instead of having the robot handle turning everything off, you could implement a protocol from the remote side, using your laptop code, that says, "Okay, if it's in safeguard mode, let's go ahead and pull back off." And that would also be an automated movement.
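A sketch of the recovery protocol suggested here, assuming a hypothetical status string and an illustrative URScript-style retract command; check the safety-status values your robot actually reports before relying on this.

```python
# Hypothetical safeguard-stop recovery logic. The status strings are
# placeholders for whatever your robot's safety status actually reports;
# the URScript call built below uses real URScript functions (movel,
# pose_trans, get_actual_tcp_pose) but the parameters are illustrative.

def recovery_command(safety_status, retract_m=0.02):
    """Map a safety status to the next interpreter command (or None)."""
    if safety_status == "SAFEGUARD_STOP":
        return None  # robot is held; wait for the button to be released
    if safety_status == "RECOVERING":
        # back the tool off along the tool z-axis by retract_m metres
        return (f"movel(pose_trans(get_actual_tcp_pose(), "
                f"p[0,0,{-retract_m},0,0,0]), a=0.1, v=0.05)")
    return None  # normal operation: nothing to do
```

The idea is the one from the discussion: the remote side, not the robot, decides what happens after a safeguard stop, so the reaction can be an automated retract rather than a shutdown.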

>> Do we have that button?

>> Uh, yeah. I mean, I can send you guys the link to buy it. We just bought it on Amazon. It's just a switch button. So, basically, we connected our switch button to the control box; there's a safety input, and all of this is pinned out on the pin board.

>> So, you would just buy the button?

>> Yeah. You just buy this button, and this box is what it connects to.

>> It doesn't come with a button, then?

>> Yeah, that's a button that you have to add. It definitely has the emergency stop.

>> So, the teach pendant does have the emergency stop; that comes with it. But additional safety features you have to add. Why don't they implement a safeguard stop too, you know? It sounds to me the safeguard stop is more...

>> Yeah, it depends; it's application-based, on a case-by-case basis. The safeguard stop can be more important if you're working with humans. This is meant to be a collaborative robot, but a lot of people use it in industrial and manufacturing settings, in which case you don't necessarily need to be quick about moving the robot away if something happens. In manufacturing settings, a lot of people will just have an emergency stop button hanging from the ceiling or something and say, "Oh, someone go over there and press that button," right? But that would also be an emergency stop, or an additional button to install. So yeah, the manual e-stop again should be the third layer: hardware limits, software limits, and then the emergency stop as kind of your last line of defense. But this is really important for human collaborative robot applications.

>> It's also important if you use this in a hydrophone scanning setup, for example.

>> Yeah.

>> If it's starting to press against the hydrophone, you want to stop and pull it back. You want to stop there.

>> Yeah. Um, yeah. Does anyone have any questions about this?

>> I'd just ask, in your own experience, would you go from a safeguard stop and automatically transition into manual drive?

>> Free drive, yeah. Does that make sense? Or...

>> Yeah, that's what we do. So, we don't really have to use the safeguard stop button as much, because we are already manually positioning the robot. But for automatic stuff, like automatic scanning on patients, that's where it would be super useful. So, with the safeguard stop: say the patient is feeling uncomfortable and presses the safeguard stop button. You would go over to them, undo the safeguard stop button, and then we have a button that says, okay, enable free drive. The safeguard stop cuts all command reading from the robot and then starts fresh. So when you say, okay, now we're going to free drive, then you can free drive immediately, and I think that makes sense. But also, if you want to speed up the process, you can say, okay, the safeguard was on and now it's off; let's go ahead and implement some sort of automatic moving-away protocol as well. It kind of depends on your operation.

>> It's probably like an exception?

>> Yes, exactly. Yeah.

So again, I'm introducing these as three layers because that's how we set it up in our IRB, and our IRB liked that. So if you guys want to use this, it worked for us. And so these are the differences between the emergency stop and the safeguard stop. This table is also available in your manual; a lot of things about safety are available in the manual, so if you have any questions about that, the manual can answer them. So, about whenever you need to reset: specifically, they say "requires reinitialization." This is the big difference. The emergency stop requires you to release the brakes of the robot again. It engages the brakes, and then you have to go to the teach pendant and undo the brakes. Sometimes when you undo the brakes, the robot jolts a little bit, which I just wouldn't recommend in the context of a patient watching this all go down while they're already feeling uncomfortable. So, I would advocate more heavily for the safeguard stop.

Okay. So, how do you actually control the robot? We're going to go through a demo of how to do this, but there are three main protocols that you need to use when addressing the robot, and all of this is through socket connections. There is the dashboard, the interpreter, and the Real-Time Data Exchange modules. Each of these is an independent socket connection, just through a different port on the robot, but they have different functionalities.

The dashboard is for basic things like powering the robot on and off, starting a program on the robot, stopping a program on the robot, getting the operational mode (whether it's in local control or remote control), and also releasing the brakes.

The interpreter is the thing that we use the most in the NPL lab, because we're doing very basic commands with the robot. We're kind of just telling it to rotate, or once we're done with free drive, we need to end free drive, or enable free drive. All of those commands can be transmitted serially. So the interpreter is good for receiving serial commands and just doing them one at a time. That also works really well if you're trying to use a GUI or something, because each button is a command, right? It'll just execute them serially.

Finally, the Real-Time Data Exchange is what really helps with safety, and also with force monitoring and position monitoring. This is constantly polling the robot for attributes that you care about, such as force, position, safety status, and also the inputs to your I/O panel. So, if you're receiving inputs from something, you'll be able to poll the voltage at each of the pins, and you'll also be able to send commands to, you know, drive pins high on your analog and digital panels. Your max sampling frequency for RTDE is 500 Hz. I was wrong yesterday; it's pretty high. For the e-Series, it's really, really high. So, we use a combination of the interpreter and the Real-Time Data Exchange: the interpreter for sending the robot commands to do stuff, and then the Real-Time Data Exchange to monitor force.

>> 500 meaning what?

>> 500 Hz of polling the robot for position.

>> Oh, so it's pretty fast. Yeah.

>> So, you get 500 time points per second, basically, of where the robot is.

>> Yeah, and you can stream commands through the real-time data exchange; you can also tell it...

>> Okay, that's pretty good if you want to stick to that.

>> Yeah, at that point it's a limitation on our end, not the robot. Um, yeah. Okay. And controlling the robot, especially for the interpreter and Real-Time Data Exchange: all of this is conveyed through the Universal Robots scripting language, or URScript. There are scripting manuals available online; this is already heavily documented, and we'll walk through how our code does that as well.

Okay. Robot kinematics. I'm just going to go briefly over what kinematics are for robotics applications. When you have a robot, each joint has its own rotation angle. If you are defining where you want your robot to be in base coordinate space, the robot needs to convert all of the joint angles, because what it's measuring is the acceleration, the velocity, and the joint angle of each joint, and it already knows how long each link is. So it uses that to conduct forward kinematics, to transform all of your joint angles into a final pose in base coordinate space.

It uses Denavit-Hartenberg parameters, which are basically just the parameters that you input into that forward kinematic calculation. If you are planning on doing robotics for your PhD, I would advise taking some of the robotics department's classes. They're actually really, really useful; I audited one in my second year and it was really nice. But for all intents and purposes, the robot will also conduct all these calculations for you, as long as you give it the initial pose and then the transformation matrix to get to the final pose. So if you say, "I want to rotate my joint angles," that's pretty easy. But if you want to say, "I want to move from this pose to this pose," it's a little bit more complicated. The robot can still do that for you as long as you know what angles you're asking for. And all of that is, again, through the UR scripting language. So this is a command that I would give to the robot to get the joint angles given an existing pose. My target TCP pose is where this is located in base coordinates, and then, say, I want to get my joint angles, the joint angle for each of my six joints; I can have the robot do that for me.
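The command in question is URScript's get_inverse_kin(), which takes a target TCP pose in base coordinates (x, y, z in metres; rx, ry, rz as an axis-angle rotation vector) and returns the six joint angles. A sketch of how such a call might be formatted before being sent over the interpreter socket; the pose values are placeholders, not a real target.

```python
# Formatting a URScript get_inverse_kin() call for the interpreter.
# get_inverse_kin() is real URScript; the pose here is a placeholder.

def inverse_kin_command(pose):
    """Build the interpreter line asking for joint angles at a TCP pose."""
    x, y, z, rx, ry, rz = pose
    return f"q = get_inverse_kin(p[{x}, {y}, {z}, {rx}, {ry}, {rz}])"

print(inverse_kin_command((0.3, -0.2, 0.4, 0.0, 3.14, 0.0)))
# -> q = get_inverse_kin(p[0.3, -0.2, 0.4, 0.0, 3.14, 0.0])
```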

>> Is there like a global reference point for all the joints?

>> Yeah. That is defined by your initial configuration; you basically just have to tell it where the robot is mounted. And I guess the only other thing that you really need to consider is where gravity is.

>> I'm guessing that central post right there must be the global reference; it only spins, right?

>> Yes.

>> Okay. So all the other joints know where that is, and then they just derive, you know, the position of the... I don't know, the...

>> The tool, I guess, yeah. It's basically just a combination of different transformation matrices, given the rotation angle of each of the joints, and then you get to your final pose. It's just a whole bunch of rotation matrices put together, using those parameters as your pose rotation matrices.

>> Six of them?

>> Yes, yeah, there are six joints, so this is a six-degree-of-freedom robot. That just means six different rotation matrices.
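To make the "bunch of rotation matrices" concrete, here is forward kinematics for a toy two-joint planar arm. This is not the UR's actual geometry or DH parameters, just the same chaining idea reduced to two dimensions: each joint contributes a rotation, each link a translation.

```python
# Toy planar forward kinematics: two revolute joints, two links.
# The real UR chains six such transforms in 3D using its DH parameters.
import math

def planar_fk(q1, q2, l1, l2):
    """Tool (x, y) after rotating joint 1 by q1 and joint 2 by q2 (radians)."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

# Fully outstretched along x (both joints at zero):
# planar_fk(0, 0, 0.4, 0.3) gives (0.7, 0.0)
```

The outstretched configuration in the comment is also, incidentally, the kind of pose where singularities live, as the talk discusses next.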

>> So, just jumping ahead a bit: in your experience, is there much risk in, sort of, the position-to-position transform?

>> Uh, yeah. I think that would mostly depend on where your limits are located. The one thing that you need to be concerned about is singularities, which basically only really happen if the robot is contorted a lot or if it's all the way outstretched. If you are at a singularity, it's basically like being gimbal-locked: if the robot is totally outstretched and kind of locked in that configuration, it's hard to move from one pose to a pose that's really close by without rotating all the joints at the same time, and that can cause the robot to malfunction. So singularities are really, really important. But other than that, I would say there's no other pose that the robot shouldn't be able to complete. We can actually sit down some time to figure out the actual limits, but yeah.

>> But there's really no ambiguity in the algorithm, in the sense that there are no two combinations that will actually lead to the same movement? So one position has one unique combination of all six joints, right?

>> No. Forward kinematics is like a function: you give it joint angles and it gives you a pose. But inverse kinematics is always an underdetermined problem, because for one pose there are multiple joint combinations.

>> Is your question like: you may pick something weird, so that, say, we're expecting it to translate this way and it actually ends up unfurling the arm?

>> Yes.

>> That's a great point. Okay, now that I understand your question better: what we do normally is translate in very, very small increments, and for that, when you tell the robot to move one millimeter, two millimeters, the inverse kinematics calculation will identify all the different combinations of doing that and choose the one with, I think, the lowest residuals between the joints. And so, along those lines, we never give the robot a long distance to traverse. If you were going to do that, it would be better to give it incremental positions in between. So we use the interpreter to send "go 1 millimeter," "go 5 millimeters." But if you really want to do a sweep from one position to another, it would be better to identify the path, so create a path and discretize that path by x amount of steps, and then feed that into the RTDE. With the Real-Time Data Exchange, a lot of people will have, like, a text file with the positions and say, okay, these are positions sampled from what I want my path to be: I want you to go from here to here in one swooping line, but discretized by some number of steps, and then feed that in so that it moves in very, very small increments. So that makes sense. Yeah, I would say that is how I would do it. We haven't actually done that yet, because we haven't needed to move in an automated way necessarily, but it would be cool.
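The discretization just described can be sketched as a simple per-component interpolation between a start pose and an end pose. A real implementation would interpolate the rotation part properly (e.g. with slerp); this naive version is only reasonable for small moves, which is exactly the regime the talk recommends.

```python
# Discretize a straight-line move into small waypoints, each of which
# would then be streamed to the robot one at a time. Naive per-component
# lerp: fine for small translations, wrong for large rotations.

def discretize(start, end, steps):
    """Return `steps` poses from start to end, inclusive of the end pose."""
    waypoints = []
    for i in range(1, steps + 1):
        t = i / steps
        waypoints.append(tuple(a + (b - a) * t for a, b in zip(start, end)))
    return waypoints

path = discretize((0.0, 0.0, 0.3, 0.0, 0.0, 0.0),
                  (0.1, 0.0, 0.3, 0.0, 0.0, 0.0), 10)
# 10 waypoints, 1 cm apart along x, ending exactly at the target
```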

>> Yeah.

>> Um, okay. Next, I also wanted to really quickly introduce the UR simulation tools. I'm putting this up there; I've never used it, but I know that people can use a UR simulator that's available online for free, running in a virtual machine if you're not on Linux. Or RoboDK, which is also partnered with Universal Robots, but I think you have to pay for it. You can get a 30-day free trial, but I don't know how much you have to pay after that. So, these two are available in case you guys want to test movements and positions. We don't use them because we're not doing anything egregious with the robot, but if you do have weird paths and weird collision-type environments that you want to test, you can use these.

Okay, how we use the robot. We use it for 3D rotational SWEI, where we collect shear wave measurements at specific rotational increments with the robot. We like to say that we're using our robot as a very fancy rotation stage, because we navigate it to where we want to image and have it rotate. By collecting multiple shear wave measurements as a function of rotational angle, we get shear wave speeds as a function of rotational angle. And specifically in anisotropic materials, we see that shear wave speeds change as a function of rotational angle. Namely, the major axis of the ellipse that we fit corresponds to the shear wave speed along the direction of, say, the skeletal muscle fibers, and there's a perpendicular direction as well. So when we synthesize the shear wave data that we get from a total rotational acquisition, we see an outwardly propagating ellipse.

So, this is our protocol. First, it begins with a free drive movement: we navigate the probe to the area on the leg, right now, that we want to be imaging. Once we do that, we lock the transducer by ending free drive, and then we have it rotate 180 degrees. And again, this is a very simple command, because it's just saying: this last joint, I just want to rotate it 180°.

Then finally, we end with a free drive movement again, away from the leg. So this is the GUI that we use to conduct our rotational acquisitions. And again, as I said, with interpreter logic a GUI works really well, because each button is just a command that you're sending, which it can execute serially. We use Python code for this, and we do utilize the interpreter, the Real-Time Data Exchange, and the dashboard to get all these things working in tandem. It works on Linux and Windows, and we actually got it working on the Sequoia system too. So, it can be done.

>> On the Acuson Sequoia?

>> Yeah.

>> Okay.

>> David got it working.

>> Now he has to pay you!

>> Right? Exactly.

>> Okay.

>> Um, yeah. And so, to affix the transducer to the robot, we use a 3D-printed probe holder. A template for this is available on Box, but we basically just take scans of our transducer and then put it into this little cavity here. So we make a cavity for the transducer, and the cord just comes out and is attached with Velcro on the side.

>> The reason why it's so long is because you have to have room for the cord?

>> Yeah, so the cord doesn't have too much tension as it bends around.

>> For you guys, it's important to center the probe with the rotation axis, right?

>> Yes, yes. That is something we don't really have too much trouble with anymore; I can manually position it to see that the joint rotation is not too bad. But yeah, centered rotation matters, because the probe holder sometimes can be kind of tilted, and the probe can also be tilted inside the probe holder. Sometimes we get a wobbly rotation, because there is an inherent additional transformation matrix that we have to kind of undo to synthesize our rotational acquisitions into a totally aligned volume.

>> If we hold, like, a big array, you're talking about like 20 cm by 20 cm.

>> Sure.

>> So we have to somehow...

>> That's entirely possible.

>> But we have to have a probe holder, I think.

>> Yeah, you just have to make sure that it has this tool interface. This is the only thing that's important: you just have to make sure that this screw flange lines up.

>> Okay.

>> And the dimensions for the spacing and stuff are available in the manual.

>> And that thing doesn't have any play? Meaning, you put a probe there, it is locked?

>> If you're just trying to wiggle the probe, it wouldn't move. It's fixed.

>> Oh, yes. Yes. Yes.

>> No matter how heavy it is, right?

>> It depends. So, no matter how heavy it is, it doesn't necessarily move as long as you're tightening these enough. I will say the weight of the probe is an issue depending on the robot that you're using. This has a maximum payload, I think, of seven and a half kilograms.

>> Um, right.

>> Yeah.

>> Yeah. So, you have to make sure that the load that you're putting on this is below that.

>> Okay. It's probably going to be...

>> But we have to do a good job with the 3D-printed holder, to make sure there's no play there, right? I'm sure the robot itself should be pretty, you know, sturdy.

>> Right, the robot is sturdy. It's metal; it should be fine. I'll also say we like to use PLA filaments because they're a lot lighter.

>> You can control the density.

>> You can control the density and stuff, yeah.

Yeah.

>> And also, this thing is so long, it has a lot of cantilever effects, right? So I don't know if 7.5 kg matters as just the weight. And I don't know how long this cable will be; you have this thing sticking out a couple of feet away.

>> So when you... yeah, it depends. There is additional torque exerted on the end of the robot, depending on how far away the sensor is located.

>> But that's the thing that you configure once you put your attachment onto the robot. You'll say, "Okay, the center of gravity of my attachment is this far away from the top, and the actual payload is expected to be 14 kg, or 14, uh..." And then it'll have you go through a calibration so that it can determine the rest.

>> So it does that for you?

>> Yes.
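The payload and center-of-gravity settings discussed above boil down to a simple static check. Here is a minimal sketch, assuming the 7.5 kg payload figure quoted earlier in the talk; the helper names are illustrative, and the controller's own payload calibration (and the derating curve in the UR manual) is what actually governs the real limit.

```python
def flange_moment_nm(mass_kg: float, cog_offset_m: float, g: float = 9.81) -> float:
    """Static gravitational moment at the tool flange: tau = m * g * d.
    A far-out center of gravity multiplies the load the wrist must hold."""
    return mass_kg * g * cog_offset_m

def within_payload(mass_kg: float, max_payload_kg: float = 7.5) -> bool:
    """Check the attachment mass against the rated payload
    (7.5 kg is the figure quoted in the talk for this arm)."""
    return mass_kg <= max_payload_kg
```

For example, a 1 kg probe holder with its center of gravity 10 cm from the flange exerts roughly 1 N·m of static moment, which is why the controller asks for both the mass and the offset.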

>> Um does anyone have any questions up until this point?

>> Okay, so this is our acquisition protocol. I made a little flowchart, and I'll say that this is my pride and joy, because it helped me visualize our flow a lot better. I would advise that if you're going to do complicated movements, a flowchart is really nice. I use this flowchart to treat the robot more like a state machine whenever we're doing our acquisitions, because it's mostly in idle, free drive, or a homing configuration, and then whenever we get into our acquisition we do a sequence of steps: lock, rotate, collect SWEI data, finish with the data, and then go back into free drive mode. So I would advise making one of these before starting your acquisitions, because it helped me a lot.

Um, and then we're going to focus now on just the acquisition, how we do the acquisition. This is important for you guys as well, because we use the robot to trigger the Verasonics for collecting data. What we care about is rotating the transducer at a fixed rotational speed and collecting data every 5 degrees.

You can do this one of two ways. You can assume that the robot is rotating at a constant speed and acquire on a timer, or you can poll the robot position and send the trigger out whenever it reaches each 5-degree increment. As of right now, we're triggering via time. So we just assume a continuous rotation, and every 5 degrees, or, let's say, every 0.25 seconds, every quarter second, the robot sends out a hardware trigger to the Verasonics, or the Vantage box, via the control box. So we have a pin-to-BNC cable that we plug

into our control box. Once the Vantage receives this trigger, and we've started the acquisition on the Vantage side, it just waits for the triggers to collect the acquisitions. So the robot sends the trigger out to the Verasonics; the Verasonics receives that trigger and then collects an acquisition. An additional piece of functionality we wanted to implement: at some point we realized that I was using the sleep timer in my Python code to send out the triggers, which was a bad idea, because there's additional latency with the sleep call, so it was actually sending triggers out later than I thought. So I changed that functionality to use an actual while loop instead. And just to make sure we know when the Verasonics is receiving those triggers, I'm having the Verasonics send me the times it received the triggers back at the very end of my acquisitions, so we know where the robot was at each time, assuming a constant rotation.
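The sleep-latency problem described above is the classic drift of `time.sleep(interval)` in a loop: each call overshoots slightly, and the errors accumulate. A minimal sketch of the fix is to schedule against absolute deadlines instead; the 20 deg/s speed matches the figure mentioned later in the talk, and the helper names are illustrative, not the speaker's actual code.

```python
import time

ROTATION_SPEED_DEG_S = 20.0  # constant rotation speed (figure from the talk)
TRIGGER_EVERY_DEG = 5.0      # acquire every 5 degrees
INTERVAL_S = TRIGGER_EVERY_DEG / ROTATION_SPEED_DEG_S  # 0.25 s between triggers

def trigger_deadlines(n: int, t0: float = 0.0) -> list:
    """Absolute trigger times t0, t0 + 0.25, t0 + 0.5, ... Sleeping until each
    absolute deadline keeps per-iteration latency from accumulating, unlike
    calling sleep(INTERVAL_S) between triggers."""
    return [t0 + i * INTERVAL_S for i in range(n)]

def wait_until(deadline: float) -> None:
    """Coarse sleep to near the deadline, then a short busy-wait for punctuality."""
    remaining = deadline - time.monotonic()
    if remaining > 0.002:
        time.sleep(remaining - 0.002)
    while time.monotonic() < deadline:
        pass
```

The key design choice is that each deadline is computed from `t0`, not from the previous wake-up, so a late trigger does not push every later trigger back.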

>> Um, are you recording that trigger-time information somewhere?

>> Yes. So, basically, yeah, either on the scope or... So, not as in every time: when we receive the trigger, the Verasonics talks back to us over our REST API, which is server-client communication, so that's not a hardware trigger back.

>> But you can do a trigger back; this can receive triggers and send triggers.

>> Um, why don't you just...

>> You can do that too; it'd be more precise.

>> Yeah, it'd be more precise. At the time, I needed to figure out how the Verasonics would send the triggers out at specific times as well, because our previous framework for this was having a trigger box send triggers to the Verasonics. So we've never had the Verasonics fire at a specific time and trigger the robot. You could also do that.

>> But to trigger the robot arm: in their

application, they want the probe to move to a certain position and then acquire data. If you use the Verasonics to drive the arm, it's a little awkward, because you tell the arm to move, but only after the arm moves to a certain position do you acquire data, and the Verasonics still doesn't know whether the arm has reached the position.

>> So I think the other way is more natural: okay, the arm has hit that angle position, now you go, because that trigger is very fast. I think it makes sense to go that way.

>> Yeah, but for our new case, I think it's still easier for us to trigger by position, because the arm knows when it's at the right location and the scope acquires at that position, right? Because if you tell the robot arm to go, you still don't know when to start acquiring at that position; you don't know when the arm has finished the movement unless the robot arm sends a signal to say it's done.

>> Yeah, you would have just used the arm trigger.

>> I think for our applications it would still technically work if you send the trigger and then have your robot start rotating after waiting a second. So for the constant rotation it works. But I think for your applications, if you're trying to navigate to a specific position, you want the robot to say, "Okay, I've reached it," while the Verasonics is waiting to collect data when the movement is finished.

Yeah, yeah. You can totally do that.

>> So yeah, what we do is we assume constant rotation. With the interpreter, I just say rotate 180 degrees. The robot starts rotating at some speed; it ramps up and starts rotating. We just want to make it as quick as possible, because if we start and stop, that motion will take longer. We also wouldn't want a ramp-up, ramp-down velocity profile, so we do constant rotation. But by all means, you don't have to.

>> We have, what, 20 to 30 seconds per position.

>> At each position, the robotic arm barely moves, you know. For us, we have to physically stop at each step.

>> But yeah, the lock...

>> Okay, does anyone have any questions about this? Um, so now I just want to talk through some tips and tricks that we've picked up, some complications of robot movement that show up in our acquisitions. One thing that we definitely noticed while rotating, even within a single SWEI acquisition monitoring shear propagation, is that sometimes we see this weird vibrational motion in our acquisitions, which shows up in our space-time plot. So if we're looking at the shear wave propagating out this way, we see that as the shear wave propagates, there's some sort of up-and-down motion throughout all of space, but it's not a fixed frequency in time either. It's just something like this.

Um, so we see it in our space-time plots. We can filter it out, obviously, but we do see some sort of vibrational motion. To date, we've hypothesized that it's something to do with the stepper motor.

>> Maybe Hudson?

>> Uh, or Hudson. But I also tried this with and without the robot, and with the robot it shows up consistently.

>> Okay. So, like, the motor from the...

>> Yeah. So, as it's rotating, there's probably...

>> Rotating. Okay. Okay.

>> So, this might show up as an issue for your acquisitions, so I would just keep that in mind.

>> If you're stopping at each location, >> yeah, if you're stopping at the location, that's probably fine.

>> That's pretty fast, though, right? That is, you're talking about several milliseconds.

>> Yeah. Yeah.

>> It's like a stepper, you know, steppers have magnetic coils.

>> Yeah, it's still there. And again, what we're picking up is micron-level displacement. So it's not a big deal in the grand scheme of things, but if you're tracking micron-level displacements, then you'll see it in your data.

Um, okay. Yeah. Secondly, we see rotational inaccuracy. This is probably not going to be a huge deal for you guys, but whenever we rotate our transducer, like I said, depending on how the transducer or the probe holder is oriented, we get weird artifacts. So what's shown here are rotational B-mode acquisitions; the B-modes are changing as the probe rotates. I don't know if you guys can see this, but when the transducer is perfectly centered, the central lateral position and the speckle pattern remain the same. But when it's misaligned, the central speckle pattern keeps changing, because if it's elevationally offset or something, it's rotating in a circle while doing this, rather than actually rotating on axis.

>> Do you think it's a robot problem or the holder's problem?

>> I think it's the holder's problem. The robot should be rotating about its central axis, and we've found that if we do align it manually, we don't see that issue. And again, this is a difference of a couple of millimeters of offset, but it still has the ability to matter a lot. We also see that if we resynthesize those shear-wave ellipses, the propagating ellipses look correct when the probe holder is untilted, but when it's tilted there's a kind of discontinuity between some of the data that we're stitching together in the rotational setup. So it's important that it's aligned as well as possible, so we can actually synthesize the volume in the most appropriate way.

Um, again, I'm just sharing challenges that we've seen with our system. They might or might not show up in your system.

>> You guys have that angular sampling issue, right? The closer you are to the rotation center, the better the spatial sampling; the further away, the sparser the samples. Do you guys do interpolation

>> Yes.

>> To compensate for that.

>> Yeah. Whenever we make the visualizations, at least, we use interpolation. But for the most part, what we're interested in characterizing is group speed as a function of rotational direction.

>> Okay.

>> So for the group speed, it doesn't necessarily matter. And I think whenever Ren is doing her phase velocity estimation, she's still only choosing the points that are along that ray.

>> Um, for you guys, the closer to the center, actually, the more overlap you have.

>> Yes, right. The more estimates you have for the same point, the higher quality your estimation,

>> and as you go further and further out, it's deteriorating.

>> Right, exactly. And you also have geometric decay, radial decay, as the wave spreads out.
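The angular-sampling issue discussed above has a one-line geometric core: the lateral distance between successive angular samples grows linearly with radius. A quick sketch (the function name is illustrative):

```python
import math

def arc_spacing_m(radius_m: float, step_deg: float) -> float:
    """Spacing between successive angular samples at a given radius:
    s = r * dtheta (dtheta in radians). Sampling is dense near the rotation
    center and sparse far from it, which is why far-field points need
    interpolation onto a uniform grid."""
    return radius_m * math.radians(step_deg)
```

At 5-degree steps, samples sit about 0.9 mm apart at a 10 mm radius, but about 8.7 mm apart at 100 mm.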

>> So if we do a movement that is not a rotation, more like a translation, or maybe some kind of rocking of the probe as we move along the contour of the rabbit's body, stuff like that, then we have to somehow map out the probe location at each position, and we have to deal with uneven sampling throughout the process and figure out how to map it onto a uniform grid; the step size will not be uniform.

>> Right, right. So you'll need to be constantly polling the position, where the robot is, every time. And I think also...

>> Yeah, I think that's a more difficult problem, because if you're not translating evenly... If you're translating, applying a constant force is also something I was doing, because you don't want to be compressing anything from frame to frame. You don't want your frames to differ.

>> Um right.

>> If you think about the limb of a rabbit >> Yeah.

>> you almost have to map out the contour of the limb.

>> Yeah.

>> And then somehow describe a path that follows the contour >> and then >> but that way you can get all the coordinates.

What you could also do is have a big coupling bath, like a water bath, and just do a... yeah, because...

>> I will say that the problem you're proposing is very, very difficult, so much so that the people in our robotics department are actively working on it, because they do a lot of cobot stuff. They want the cobot to automatically scan, say, the forearm, and that involves monitoring the force constantly, making sure that you're not overly compressing, and that you're also maintaining adequate coupling. The robotics people are actually very interested in ultrasound.

>> So it's a pretty advanced problem.

>> It is a very advanced problem that they're still trying to figure out. I think the easiest way to navigate it would be having a huge...

>> Okay.

>> Yeah. Just... yeah.

>> Cool. I thought it was already, like, a solved problem.

>> Yeah, right, exactly. And that's why they're excited about it: because ultrasound is so portable and freehand, it pairs very well with what a robot can offer. But it is a very difficult problem.

Okay, awesome. Um, so now let's go ahead and launch into a demo. I'm just going to show you guys how our code works with this robot; you can see what each button does, and I'll talk through it.

Before I start the demo, does anyone have any big questions?

>> No... what about the wall?

>> Yeah, like I had mentioned, there are these things called safety planes, where you can basically say, "Hey, the wall is here." I was also looking at a YouTube video about safety planes and some other extra features. You can basically say there's a safety plane here, and if you're free-driving the robot, it won't go past it. It'll also bounce back: if you push it past the safety plane and then let go, it'll move back into the safe space, which is pretty cool. Okay, so my code is available on GitLab. You guys can access it and use it as needed. Um, I store all my code in a couple of different packages. One of them is UR control, which basically starts the dashboard, starts the interpreter, and starts the RTDE at the same time. Then I initialize all of those and plug them into my GUI, to just say, okay, interpreter do stuff here, RTDE do stuff here.

>> We also have to turn it on, and you guys will get to see how it kind of cracks its knuckles. Do you want to start explaining?

>> Yeah. Um, so on the teach pendant... yeah, this is what the teach pendant looks like.

Um, in the beginning, it kind of asks you what you want to do with the robot. You don't have to actually worry about that. Let's just say program the robot, or not; that's cool. Okay. Um, and if you want to control the robot with a laptop, you have to be in remote control mode, which is in the top right corner. Manual mode, local control, lets you move the robot from the teach pendant. So if you're in manual control, you can actually go to this move tab.

>> Okay, if we're going to move it, we have to turn it on actually.

>> So, let's go ahead and turn it on first.

>> Are you sure you're safe?

>> It's not going to punch you.

>> She's all right. We all watched the video.

>> I don't know if you got to see the video in the beginning.

>> What's your robot's name?

>> My robot's name is Rhonda and we love her very much.

>> Robot have a name.

>> Yeah, you guys need to name your robot.

We named our robot so that when we go up to the clinic and introduce the robot to the patients, we can say, "Her name is Rhonda," so they're not scared, because it's huge. It's a huge robot; you guys have all seen it, you know what I'm talking about. But yeah, I can imagine, as a patient lying there waiting to be imaged, you see this huge robot being wheeled into the room and you're like, um... So we call her Rhonda. We were thinking about sticking some fun googly eyes on her. That would be fun.

>> Decorate it,

>> right? Decorate it. Yeah. So once you press the on button first, it'll go into idle mode. Now we need to release the brakes, so I press start. It's releasing the brakes.

It kind of cracks a little bit; sometimes it'll jolt a little bit. So, a tip I'll share with you: I'm a little hesitant about putting the transducer on the robot before it turns on, because it does that weird jolt motion. The transducer is fragile; let's keep it from vibrating. So I put the transducer on only after the robot is on. Okay. Um, so now if we want to go into move mode, there's this move tab, where you have the robot here and you can actually move it in whatever direction you want relative to base coordinates. So these are TCP orientation and TCP position. TCP position is if I want to move it left and right; I guess this is the Y direction, and then this is the X direction.

>> This is all moving now.

>> So this is the X direction, I think. And then this is the...

>> I think this is the picture; you have to change the picture.

>> Yeah, you're right. This is from my... um, yeah. So now that we're in tool space, Z is forward and backward, I think. Yeah, Z is forward and backward. X is up and down, apparently, and Y is left to right.

So you can move the robot based on the tool view, which is the coordinate system of the tool. If you want to move it in base coordinates, you can change this tab: instead of saying tool, say base. So now I want to move it in Z, which is up and down. This is Z. Y is kind of back and forth, and then X is left to right.

>> Yeah. So there are several joints right?

Yes.

>> But it can calculate by itself?

>> Yes. Yeah. So the UR internal system does all the forward kinematics, and the inverse kinematics as well. It's useful to know kinematics just so that you know what it's doing, but if you give it a pose that you want it to travel to, relative to the current pose, it will calculate that. So you can move the robot from the teach pendant.
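Letting the controller do the kinematics looks like this in practice: you hand it a pose relative to the current one, and it solves the joint motion itself. A sketch using URScript's documented `movel`, `pose_trans`, and `get_actual_tcp_pose` functions; the offsets and speed parameters are illustrative.

```python
def movel_relative(dx: float, dy: float, dz: float,
                   a: float = 0.1, v: float = 0.01) -> str:
    """Build a URScript line that moves linearly by (dx, dy, dz) meters in
    TOOL coordinates. pose_trans composes the offset with the current TCP
    pose, so the controller handles the inverse kinematics."""
    return (f"movel(pose_trans(get_actual_tcp_pose(), "
            f"p[{dx},{dy},{dz},0,0,0]), a={a}, v={v})")
```

For example, `movel_relative(0, 0, 0.005)` yields a command that pushes 5 mm along the tool's Z axis, which in this setup points into the phantom.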

You can also move it via remote control.

So in remote control, the move tab and all of that other manual stuff is disabled, which means that I'm only accessing it from my laptop.

>> Can you reach the ceiling?

>> I don't think so.

>> Yeah, you can.

>> Can Can you just demonstrate like how how long?

>> Yeah. Let's see. So, another thing that you can do while you're in local control is put it into free drive mode. Free drive mode is something that we use very often. You can also... hold this for me, and can you hold this button? Sorry.

>> So this is free drive mode. Okay.

>> Which means you can move the robot.

>> Okay.

>> And what if you let it go?

Yeah. So this joint goes this way.

>> Oh. Oh. Okay.

>> Tada.

>> So right.

Our big UR is pretty close to the top.

If you put it at the top...

>> Does it know the position of where it is?

>> As of right now, it does. Yeah. So you can see... did you see that? Where I was trying to move it like this, it was giving me resistance, because there's a safety plane over here.

>> So I need to keep this move.

>> How does it know there's a safety plane there?

>> We can configure it.

>> Okay. Oh, so in real time it actually detects where it is.

>> Yes, it knows where it is relative to the base coordinates, so you define the safety planes relative to the base. It also checks for collisions: if it does detect a collision, it'll shut down.

>> But it doesn't know the existence of this.

>> You have to define this.

>> We have to define this.

>> Yeah. So you define this in the teach pendant.

>> Okay. So it doesn't know it has this attachment. Okay.

>> So yeah, what we did is we measured from here to the end of the transducer, and we told it that the tool actually extends out to here.

>> Okay. So also, when you do that, if you want to give it a rotate-around-the-tool command: if you say rotate around the base, it'll rotate like this, but if you say rotate around the tool, it'll actually rotate like this; it changes its radius of rotation depending on where the tool center is.

>> Okay, cool. So now we're going to switch into remote control. Anything else in manual mode you want me to go over?

>> Is there a recommended way to unwind it or recover it? For example, you park it this way and then you want to rotate it in the opposite direction, or you can...

>> You can go either way. It has joint position limits: it can rotate 360 one way and 360 the other way, and then it can't rotate any more than that. So when we do our rotational acquisitions, I ask for the current position, and depending on where that position is, I'll rotate one way or the other, just to make sure it's not hitting the limit or winding up to one side.

>> You got to think about where your cord is too.

>> Exactly. Yeah. Um, yeah.

Okay. So now we're going to be in remote mode, and with the robot, I'm loading my code. I'll just open the cheat sheet.

>> Yeah. So there's a cheat sheet, and it's always going to be from the control box. So,

>> yeah.

>> And then you can connect.

>> Yeah, if you want to control the robot using a laptop and an Ethernet cord, you have to do all of this through...

>> Wired. No Wi-Fi.

>> No Wi-Fi.

>> At least not to my knowledge.

I need, like, those XYZ stages. This could replace those XYZ stages. We still need a stereotactic frame.

>> Sure.

>> Yeah. So this is our GUI that we use for our rotational acquisitions. I'm just going to walk through the buttons really quickly. This left side is unlock free drive, free drive, and then lock, or no free drive. So if I want to free drive the robot, I press the free drive button, and then it lets me move it.

And basically all this button does is send a command via URScript to the interpreter that just says, "Hey, turn free drive mode on." That's it. Then if I want to lock it, it says, "Hey, turn free drive mode off." That's it. If I want to move, my up and down commands are a little relative, because when we're doing phantom imaging, down means forward, to push down into the phantom. With move down, I can say, "Okay, move one millimeter down," or "move 5 mm down."

>> So this is base... pardon me, this is actually tool coordinates, because this is moving so that Z is this way no matter where the transducer is pointed; Z is forward along the tool.

>> Yeah. Yeah.

>> So at the same time that I'm sending the robot commands via the interpreter, I'm also engaging with the real-time data exchange to monitor the force on the end-effector. This is really important, because when we're doing SWEI acquisitions, compression can change the shear speed. Also, we want to make sure that we're not compressing the person too much. So let me go ahead and zero the force. Whenever we're monitoring the force in real time, compressional force is what we're mainly worried about. So if I push on it, you see there's a spike in the Z force; if I push it this way, a spike in the X force; push it this way, a spike in the Y force. These are forces reported at the tool flange. I think the repeatability, or precision, of this force sensor is around one newton or something, which is pretty high; the precision is not all that good, but it's good enough for our applications.
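The zero-force button and the compression watch described above can be sketched as a tiny baseline-and-threshold monitor. The 5 N threshold and the class name are illustrative; in the actual setup, the force readings would come from the RTDE stream rather than being passed in by hand.

```python
class ForceMonitor:
    """Zero a force baseline and flag excessive compression, mirroring the
    GUI's 'zero force' button. Readings are TCP forces in newtons."""

    def __init__(self, max_compression_n: float = 5.0):
        self.baseline = (0.0, 0.0, 0.0)
        self.max_compression_n = max_compression_n

    def zero(self, fx: float, fy: float, fz: float) -> None:
        """Capture the current reading as the zero-force baseline."""
        self.baseline = (fx, fy, fz)

    def compression_exceeded(self, fx: float, fy: float, fz: float) -> bool:
        """True if the baseline-corrected Z (compressional) force is too large."""
        return abs(fz - self.baseline[2]) > self.max_compression_n
```

Zeroing matters because the tool's own weight and mounting preload show up as a constant offset in the raw readings; only the change after contact reflects tissue compression.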

>> Yeah. Um, and so at the same time, I'm monitoring force and displaying it for real-time feedback. But I'm also asking for position; I can plot position too if needed. We don't need that in real time.

>> Wait, you can do strain elastography with this. The problem is people don't know the stress,

>> right? I guess. Yeah. Yeah, you could.

>> But the direction is difficult, like the stress...

>> We could do...

>> Yeah.

>> But then that solves a big problem. Another thing is that, because we're pressing in, I think if we used a plate it would be easier to tell, because I guess there's also a homogeneous...

>> Yeah, but you know exactly the geometry of the surface, and you read the force, so you should be able to get the...

>> But we could do the...

>> Um, okay. And I have rotate buttons, again just saying, "Hey, rotate 180 degrees"; all of that goes through the interpreter. And then, when we actually send a rotational acquisition command, an acquisition script, all of that goes through the rotate-trigger button. So whenever I press that, I basically say, "Okay, I want you to stop doing whatever you're currently doing and just run these five commands: stop free drive, rotate, start sending triggers, and then stop rotating after 180 degrees." And all of that is conveyed through the interpreter. Like I said, with RTDE you can stream positions into a text file; for the interpreter, instead, my text file says, "Hey, get the current rotational angle of the end joint. Depending on the joint limits, either rotate left or rotate right 180 degrees. Once the rotation is done, stop, and then put free drive mode back on." And that's, what, eight lines in a text file sent to the robot, and the robot executes them in sequence.
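The interpreter file described above reduces to: read the wrist angle, pick the direction that avoids the ±360 degree joint limit, then run the five-command sequence. A sketch with placeholder command names: `start_triggers` and `rotate_to` are stand-ins for the speaker's actual script, while `freedrive_mode()` and `end_freedrive_mode()` are documented URScript calls.

```python
def rotation_script(current_angle_deg: float, limit_deg: float = 360.0) -> list:
    """Build the five-command interpreter sequence: stop free drive, rotate
    180 degrees away from the nearer joint limit while triggering, then
    re-enable free drive."""
    direction = -1.0 if current_angle_deg > 0 else 1.0  # rotate toward zero wind-up
    target = current_angle_deg + direction * 180.0
    assert abs(target) <= limit_deg, "rotation would exceed joint limit"
    return [
        "end_freedrive_mode()",          # stop free drive (real URScript call)
        f"# rotate end joint to {target} deg while sending triggers",
        "start_triggers()",              # placeholder for the analog-out trigger loop
        f"rotate_to({target})",          # placeholder motion command
        "freedrive_mode()",              # put free drive back on (real URScript call)
    ]
```

For a wrist already wound to +300 degrees, the sequence rotates back toward +120 rather than winding on to +480 and tripping the limit.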

Um, this is for the basic rotational commands, where the URScript goes through the interpreter and the real-time data exchange is helpful for actually monitoring things in real time. It's just a big while loop, which again treats the robot kind of like a state machine: while the RTDE continues to send me information, the loop keeps running until I tell it to stop. At every single iteration of the while loop, I'm recording the force, the position, the speed of each joint, the angle of each joint, and also the analog output of the control box, to see when I send out the triggers and so on. You have to specify what data you want, but there's a wide variety of things you can request from the robot and things you can program for your robot.

>> You constructed the GUI?

>> Yeah.

>> Pretty good. I saw it as well.

>> Oh, yeah.

>> I have another GUI with...

>> I know, once you discover GUI-making software...

>> Yeah, it's very addictive.

>> Um, but yeah, so that's the demo. Does anyone have any questions, or should I

>> make it do the rotation?

>> Yeah, let's go ahead and make it do the rotation.

On the other robot, we had to like configure the output voltage.

>> Yes. And so I'm actually sending the triggers out from the robot via the analog output rather than the digital output. I could just do a digital high, but I know that the Verasonics takes 3.3 volts or something, so I used the analog out to tell it how much voltage to send. I can't remember what the digital high level is on the robot, but if it's high enough for your application, then you can just use the digital output.
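Driving a 3.3 V trigger from the analog output comes down to scaling: URScript's `set_standard_analog_out` takes a 0–1 fraction of full scale, which is 10 V in the voltage domain. A sketch; the 1 ms pulse width and the pin number are guesses, and the 3.3 V level is the figure quoted in the talk.

```python
def analog_out_fraction(volts: float, full_scale_v: float = 10.0) -> float:
    """set_standard_analog_out() takes a 0..1 fraction of full scale; in the
    voltage domain full scale is 10 V, so 3.3 V maps to 0.33."""
    if not 0.0 <= volts <= full_scale_v:
        raise ValueError("requested voltage outside the output range")
    return volts / full_scale_v

def trigger_pulse_script(volts: float = 3.3, pin: int = 0) -> list:
    """URScript lines for one trigger pulse on a standard analog output."""
    f = analog_out_fraction(volts)
    return [
        f"set_standard_analog_out({pin}, {f})",   # drive the trigger level
        "sleep(0.001)",                           # hold briefly (pulse width is a guess)
        f"set_standard_analog_out({pin}, 0.0)",   # return to 0 V
    ]
```

Using the analog output this way lets the trigger level match the receiver's input spec exactly, instead of depending on whatever the digital high level happens to be.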

Okay. So, to conduct an actual acquisition, I use my rotate-trigger button, and then it stops everything.

It's still monitoring force and stuff.

So, there's some motor control force associated with moving the actual thing.

But then it rotates 180 degrees. While it's rotating, it's sending triggers out, and then it finishes rotating.

>> That's it. And then it's back.

>> Can you demonstrate how fast it can go?

>> Yeah. So it can actually rotate quite quickly. If I change my rotate speed... the highest I tested was 80 degrees per second, maybe, or 100. Let's go to 80. That's just a little bit faster.

>> Is that actually faster?

>> I think it was the same. Yeah, maybe it didn't actually update. Let's try one more time.

>> Yeah, right.

Huh?

>> Is that the speed? Is that the speed limit?

>> Oh, yeah. Did you put speed limits on it? That's probably

>> the speed limit.

>> Really slow.

>> It can rotate, I think, up to 190 degrees per second; that was the limit. Um, yeah, we keep ours at 20.

>> We have tested it up to 80, and we see that the artifact, that vibration-induced motion during rotation, actually gets worse at higher rotational speeds.

>> Okay.

>> Um, so we keep it low enough to minimize that as much as possible while still being fast.

>> So, okay, awesome. Does anyone have any questions?

>> What's the finest step?

>> So yeah, we've done 1 mm fine stepping.

>> 1 millimeter.

>> But also, the specs online quote repeatability for the robot at, like, 30 microns.

>> 30 microns.

>> I don't know how true that is, but at least that's the number that's quoted.

>> Because if we want to do, like, 30-micron scanning, we need to know.

>> Yeah, I would just say that's what it says online, 30 microns; I would test to see if that's true. I think our UR10 quotes, like, 50 microns or something; we haven't tested it. Maybe look at an image, see if it translates 30 microns with some sort of point target, and see if it moves.

>> Maybe.

>> RF. Yeah.

It doesn't really...

>> Yeah. Yeah. I think so.

>> Yeah, you can. You can do that.

>> Um,

>> then you can... I mean, I'm not joking: you can dwell on one point for 100 milliseconds, move to the next point, scan 10 different brain planes, and then come back and keep doing this, all in one second, right? So like a 10 Hz volume rate, and you get a kind of poor-man's version of 3D. If they can see 42 points, they can already get a pretty good understanding of the brain response. So even maybe four planes a second...

>> I think it would be really helpful, 3D.

>> That is something you could also write code for.

>> Okay, awesome. Thanks. Yes, thank you.

>> Thank you.

>> Thank you.

>> Thank you. You saw the robot.

>> You can be the center of the...

>> Oh, yeah. Right.
