Analyzing and understanding Tobii Pro Glasses 3 recordings
By Tobii
Summary
Topics Covered
- Nurses Intercept 85% of Errors Despite System Flaws
- Assisted Mapping Saves Manual Effort
- Scanning Efficiency Predicts Error Detection
Full Transcript
good morning or good afternoon, depending on where you're calling in from in the world, and welcome to our second installment of our Tobii Pro Lab tutorial series.
today we're discussing wearable eye tracking: analyzing and understanding Tobii Pro Glasses 3 recordings. this webinar will be recorded, so feel
free to re-watch or share with students and colleagues. this is the second tutorial in this series that we have planned; we also hosted a tutorial about screen-based eye tracking with static stimuli last week. the recording for that tutorial can be found on our on-demand webinar hub
but for now we'll move on to today's agenda. today I will show you how quick and easy it is to set up a study in Tobii Pro Lab, our all-in-one software platform.
we will be using recordings from the Tobii Pro Glasses 3 wearable eye tracker. finally, we'll dedicate the rest of the time to an interactive Q&A panel with my
good colleagues Carson Gondorf, our Tobii Pro Lab product manager, and Tomagodi, who's a product owner for Tobii Pro Lab. throughout this entire session, please
use the Q&A function in the GoToWebinar panel if you have questions, and we can address them during the Q&A portion. though you might think that implementing eye tracking in your research is a
daunting endeavor, I want to showcase that Tobii Pro Lab is fast to set up and very easy to use. Tobii Pro Lab will support your entire research workflow from start to finish,
including study design, calibration, recording, as well as qualitative and quantitative analysis and data export, with no other tools needed.
Tobii Pro Lab is flexible for a variety of testing situations: it's compatible with all of our hardware products, and it helps enable free head movement and a comfortable testing experience for
participants. and finally, you get quality data with metrics you can trust, great precision and accuracy, and a good history in this field, with over 10,000 published articles citing our products
and more every day. so now we'll get started showing you Tobii Pro Lab. this is the Tobii Pro Lab overview page
to start a new project you can click one of these project types give it a name and see where it'll be saved there are four different project types
the screen based project type is for use with our screen-based eye trackers: the Tobii Pro Spectrum, Tobii Pro Fusion, and Tobii Pro Nano, as well as our older systems
this is the type of project you'll use when your stimuli are presented on a computer monitor we will not be using this project type today
the glasses project type is for use with recordings from the Glasses 2 or Glasses 3; this is
what we'll be using today. for reference, the scene camera project type is for use with a non-standard configuration with real-world stimuli or a larger monitor;
you will need a scene camera to record the scene if you're using real objects. this would be using a screen-based eye tracker, so we'll also not be talking about this today. the external presenter project type is for
when you're using other external stimulus presentation software like E-Prime 3.0; Tobii Pro Lab can co-acquire the stimuli and record the eye tracking data for easy analysis with our full complement
of tools. Tobii Pro Lab is based on a license key, which can be activated or deactivated and potentially shared across different
PCs. while Tobii Pro Lab can be downloaded on any computer, you can only use one instance at a time per license that you own. this is the license type: I have a
perpetual Tobii Pro Lab license, and this is when my maintenance contract expires. under the about and update section you can see what version you're running; this is important to confirm when you're
collaborating or analyzing data, as data sets are not backwards compatible. this is also where you'll see if there are any updates available. at help and learn you can find shortcuts
to our user manual, FAQs, our learning center, and more; you can even learn more about our Tobii Pro onboarding program. but for now we will open an existing
project created with Tobii Pro Glasses 3 data. this is the project overview page for your Tobii Pro Lab project
for a little project background and rationale: this project is based on the 2011 paper by Marquard et al. called "nurses' behaviors and visual scanning patterns may reduce patient
identification errors." the background is that nurses are trained to verify patient information to ensure that medication is administered to the right patient, so they scan between different
things like the patient's ID band, the patient chart, and the medication label. they look for the patient's name, date of birth, medical record number, medication, and the dosage.
nurses have been found to successfully intercept the majority of potential medication errors; according to a study by Leape et al. (1995), they actually intercept roughly 85 percent of
potential medication errors. however, they're not always primed to identify such errors, and another study, Henneman et al. (2009), reported that 39 percent of nurses administered the medication to the wrong
patient. now, that's not to say that nurses are the cause of medication errors, because oftentimes external factors beyond their control are to blame, like interruptions,
distractions, poor system UI design, inadequate staffing, or ineffective provider communication. so in this mock study we focused on examining an internal factor that could
mediate performance: visual scanning patterns. namely, are certain patterns capable of predicting error recognition? the goal of this mock study is to
establish a causal link between visual scanning patterns and error recognition; that would have important implications for training. so the hypothesis is: nurses who recognize errors will scan artifacts
more efficiently. they will also have a longer duration of time spent looking at the artifacts, and more frequent gaze transitions over the same period of time, to compare single pieces of information
across the artifacts. to facilitate behavioral coding and clear outcomes of the study, we want to follow specific behaviors to indicate error recognition. so in these videos you'll see: if
there's no error, the participant will push the IV stand towards the patient; if there is an error, the participant will set the IV stand on the floor. I want to give a special thanks to our
friends at Research Collective, particularly Jordan, who is the patient in these videos, and Dr. Joseph Pozek, who's the participant and has allowed us to use this project
here on the project overview page you will see all of the existing recordings. it shows the duration of each recording as well as the date and time that each recording was recorded.
you'll also see the percentage of gaze samples found; this is a nice metric to consider when you're excluding recordings from the final sample. you can set your own criteria for what that
threshold should be. next we'll look at participant variables. you can see here we have several participant variables already set up, including gender, age, and levels of formal
education. you can add or edit participant variables by clicking on the plus button, give it a name, and then identify which values you want under that participant
variable for instance in this example we wanted to add a variable on whether the participant decided to give the medicine in the final trial
next we'll go to the participants tab this is all the participants in the study you can drop down and edit any of the participant variables for anyone here
this is all mock data so everything was randomly assigned next we have snapshots I will tell you more about snapshots soon but for now you can see that there
are nine different snapshots it's very easy to add snapshots to your project click on the plus button and navigate to the folder
finally we'll look at the events panel. you can create and log events to do custom analyses; here we've identified the start of the trial and when the decision is made.
so during the review of the recording I'll go through and log all these events as I watch them occur. if you want to create more event types, simply click on the plus button
and give it a name; the system will automatically populate a shortcut that you can use as you're reviewing the timeline for easy coding, but you can change that if you'd like
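since logged events carry timestamps, the same start/decision pairs described here can also be processed outside Lab. below is a minimal sketch; the event names and timestamps are hypothetical examples, not values from a real Tobii Pro Lab export:

```python
# Sketch: compute trial interval durations from logged event timestamps.
# The event names and timestamps below are hypothetical, not values
# taken from an actual export.
events = [
    ("Trial 1 Start", 12.40),
    ("Trial 1 Decision", 47.95),
    ("Trial 2 Start", 60.10),
    ("Trial 2 Decision", 88.00),
]

# Pair each "... Start" event with the matching "... Decision" event.
starts = {}
intervals = {}
for name, t in events:
    trial, _, kind = name.rpartition(" ")  # e.g. ("Trial 1", " ", "Start")
    if kind == "Start":
        starts[trial] = t
    elif kind == "Decision":
        intervals[trial] = t - starts[trial]

for trial, duration in intervals.items():
    print(f"{trial}: {duration:.2f} s")
```

the same pairing logic applies however many trials were logged, as long as each start event precedes its decision event.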
if you wanted to import more recordings, you could do so. I have a Tobii Pro Glasses 3 SD card plugged into a USB port.
simply click on import Glasses 3 recordings, select your drive, and Tobii Pro Lab will automatically populate all of the available recordings
on that SD card. you can click on which recordings you want to import and click import. for now I've already imported all the
recordings I want so I'll hit cancel the next thing we'll do is review a recording and timeline to find your recordings you can see them here in the project overview page but if
you're not on the project overview page click on the analyze drop down and you'll see the list of all the recordings we'll look at recording number one
the first thing you'll do when you get to your recording is replay it. there is sound associated with this video, but I've muted it here. you can also replay with the raw gaze
data. in order to translate the gaze data from the recording into something more useful to researchers, we use what we call snapshots. snapshots are a photo that you
can create and upload; the benefits are that you can aggregate all of the recordings from different participants onto one snapshot, and you only need to draw areas of interest once.
assisted mapping data is not perfect and will usually require a manual check, plus potentially some manual mapping, but overall it will save you a lot of time
to turn on snapshots click here under show snapshot here are snapshot images that we loaded earlier if you wanted to add more now you could click on the plus button
we recommend that you map the filtered data because it's faster, but you can map the raw data as well. we recommend using the Tobii I-VT
Attention filter if your participant is mobile; in this case they were, and they're wearing the Glasses 3. you can also use the Tobii I-VT Fixation
filter if your subject is seated or if you're using a screen-based eye tracker. to start the mapping procedure, you will indicate which snapshot you want to map to and then highlight the section of the
recording that you want to run the mapping for in this case we'll pick trial one click and drag these yellow tabs to highlight the portions of recording
that you want to map right click and click run assisted mapping the computer vision algorithm will then look for similarity between the
recording and the snapshot for things like contrast and color if you look up here on the top right it shows that assisted mapping is running
from recording one to patient chart for trial one since I've already completed it I'll close this to deselect the segment of the recording
right click and say clear selection you can queue up the assisted mapping for multiple snapshots and they'll all run in the background as well
once the assisted mapping procedure is done you can move on to the review and manual mapping this is where you will step through frame by frame to check the points or add missing data by clicking on the
snapshot so here's how it works you can see the fixation points as they have been mapped to this snapshot of the patient chart let's pause here
I'm going to use the arrows on my keyboard to move through frame by frame this is an automatically mapped point and it'll show you a similarity score as well
if you see an M, that's a manually mapped point. you'll see here down in the recording that there are these green bars and some are orange: green bars indicate points
that were mapped to the snapshots with high confidence by the algorithm, and orange bars mean lower confidence and may indicate areas for manual review. you can also change the similarity
threshold: if you want it to be higher, you can drag it up, and you see that changed some of these bars to orange; or if you're okay with a lower similarity, you can move it down. so this will help
you indicate areas that may need a manual review. some snapshot best practices we can recommend: the snapshot should be as flat as possible, the more visually
complex the better, and it should be as close as possible to what the participant actually saw. we also recommend that you use digital copies of printed materials instead of taking photos; so in this case we
uploaded the JPEG of this patient chart instead of taking a photo of the printout on the clipboard. and then finally, we recommend that you run a pilot study and review the output
to ensure that your data addresses your hypothesis great now we'll talk about event coding I showed you the event types that I made earlier but here's where you can log them
I have already done so on the timeline, as you can see: Trial 1 Start, Trial 1 Decision, Trial 2 Start,
and so on. if you wanted to create new events, you could do so here, just as we did earlier. and if you wanted to log them, simply navigate
to the spot in the recording and either use the keyboard shortcut, in this case H, and they're logged, or
you can use the log event button right here. I'll delete these. I see it automatically adjusted the custom time of interest
I created these custom times of interest as the interval between the start of the trial and the decision to either administer the medication or not. to create a new custom time of interest,
click on the plus button, but I'll show you how I did these: I used the start point as the custom event type trial one start, and the end point as the custom event type trial one
decision. there are lots of other events that you can use to create custom times of interest, but this was just relevant for this study. these intervals can be used in later
data analysis next we'll go on to more qualitative review in visualizations you can find visualizations on the analyze drop down and click
visualizations the first thing we'll find is a heat map the heat maps are aggregated information about fixations over all the recordings
you can even select sub-segments or by individual participant variables you can change between count to duration you can even change the opacity so you
can see the stimuli underneath. the next type of visualization is a gaze plot; gaze plots are individual recordings showing the order of fixations.
in this case, every color is a separate recording or separate participant. the size indicates the duration of the fixation, so the larger the diameter of
the circle, the longer the fixation, and they're also marked with the order in which the fixation occurred. this will give you insight into the way that the participants scanned the stimuli
one of our hypotheses is that nurses who recognize errors will scan these artifacts more efficiently so we can start to understand that even qualitatively by looking at this data between the groups
there was a medical error in one of the documents for trial 3.
so we indicated which participants found the error and chose not to administer the medication, versus those who did not find the error and administered the medication incorrectly. these are the gaze plots for the
participants who gave the medicine incorrectly in trial 3,
compared to those who chose not to give the medication you can see even qualitatively that there's a difference in where they were looking the group who chose to give the
medication incorrectly didn't have as many fixations to the patient name and date of birth which is where the errors were but for those who did not give the medication and found the error they had
more fixations to the patient name and date of birth if you'd like you can right click and save these images to file to use in your presentations and journal articles
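since one of the hypotheses concerns gaze transitions between artifacts, what the gaze plots show qualitatively can also be counted. a minimal sketch, using a hypothetical AOI fixation sequence (the AOI names and order are illustrative, not from the actual recordings):

```python
from collections import Counter

# Hypothetical sequence of AOI hits for one recording, in fixation order.
fixation_aois = ["ID band", "Chart", "ID band", "Label", "Chart", "Chart", "ID band"]

# Count transitions between *different* AOIs; consecutive hits on the
# same AOI are a dwell, not a transition.
transitions = Counter(
    (a, b)
    for a, b in zip(fixation_aois, fixation_aois[1:])
    if a != b
)

print(transitions.most_common())
```

comparing total transition counts between the error-found and error-missed groups over the same interval would be one way to quantify the "more frequent gaze transitions" part of the hypothesis.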
next we will go to the area of Interest tool you can find the aoi tool by clicking on the analyze drop down and clicking on aoi tool when you get to the aoi tool you can
select one of your snapshot media and start to draw areas of Interest I've already done so but I can show you how easy it is
up here under draw you'll find three different shapes polygon ellipse and rectangle simply click on the shape that you want
and click to drag and draw an aoi you can then move the aoi or resize it you can also give it a name here
but since I've already created all the aois, I'll simply delete this. you can even copy by selecting the aois and clicking copy, to paste them between
the different snapshots it's very helpful to do that to keep the aois between media the same size we also have a feature called tag groups
here at the bottom tag groups are a way to link aois across media so for instance I have a tag group called ID
and tags called date of birth and name so every time there is an aoi with the patient name I can click the name tag group and analyze these across all the
different media now these are static aois for our static snapshots but sometimes your stimuli are simply not snapshot friendly so we do have a
way to draw dynamic areas of interest, depending on your paradigm and research questions. they do take more manual work than the assisted mapping tool, but it is possible; I'll quickly show you how that works
I use the same aoi tools to draw an area of Interest so you'll draw your aoi and then manually
insert what we call keyframes to indicate the movement from one frame to the next you can even turn on or turn off your
aoi depending if the object is in the background or not so you'll go frame by frame
the IV bag is not in the frame, so the aoi is turned off. now, when the IV bag is back in the frame, the aoi is turned back on, and then you simply move the aoi
as the person moves their field of view so you can see this is a little bit more time consuming but it is possible to draw these Dynamic areas of Interest now we'll move on to the metrics
visualization you can find metrics visualization by clicking on the analyze drop down and clicking metrics visualizations
great metrics visualizations will give you a quick overview of results and Trends this is a helpful tool especially as you're piloting your data and you want to understand how your data looks
before exporting it to do high-level statistics outside of Toby Pro lab you can analyze things like total duration of an interval which we have here this interval was created as I mentioned by
marking an event at the start of the trial and an event at the decision point and calculating the length of that interval we have here grouped by condition and
participant variable of trial 3 and whether they gave the medicine or not recall that giving the medicine was incorrect and means that they did not spot the error and not giving the
medicine was correct and then they did spot the error so the total duration of the interval in seconds was shorter for those who did not spot the error and longer for those
who did spot the error. this was our hypothesis, and the data supports it. next we'll look at the total duration of fixations, and the pattern remains the same:
there were shorter total duration of fixations for the group who incorrectly gave the medicine as compared to the group who correctly did not give the medicine this also
supports our hypothesis. we can even look at the number of fixations, and the pattern also remains. you can even look at other tag groups:
in this case if we wanted to compare the duration of fixations to the participant name versus date of birth we can see that across the recordings there were longer fixations toward the name areas
of Interest as compared to the date of birth we can also look at more participant variables like levels of formal education looks like there may be a slight
increase in the total duration of fixations depending on how much education you have the next thing we'll do is go to the metrics selection and Export you can
find metrics selection and export by clicking on the analyze drop down and clicking on metrics. it's good to consider beforehand which metrics are most important; for example,
you can find guidance in published papers or articles from your field. these are the metric groups that researchers commonly use: you'll see people report fixation
duration, which is the total amount of time in seconds that the participant looks at an object, as well as the latency to fixate, so how long did it take for them to look at that object.
beyond duration, you can also find things like count: how many fixations occurred, or how many visits to an aoi occurred. you can also find event metrics like the
number of events or the interval between events, mouse clicks, or latency and count. we have saccade metrics like amplitude, velocity, or count.
we also have specialty reading metrics like regressions and rereading durations, as well as pupil diameter outputs, with left and right pupil estimates for each sample, so you can take that data and
calculate offline changes over time from the stimulus onset. we have a few different export formats that you can choose from depending on
your needs: an interval-based TSV file, an AOI-based TSV file, which we'll be exporting today, an event-based TSV file, as well as a high-level Excel report,
which we'll also export today; I'll show you these in just a moment. before you export, you can also decide how you want to filter your data:
you can export the raw data by itself, but you can also use our I-VT filters.
if you click on the gear wheel, it'll show you more details about the values used to calculate that filter. you can also create your own filters; perhaps you work with special
populations and you need to change the I-VT classifier based on eye movements. that is possible here. exporting the metrics is the last step in Tobii Pro Lab before the data is
exported to then be imported into your favorite analysis software but the last thing I'll show you is the data export to find data export click on the analyze
drop down and click on data export great the data export will give you all the XYZ data for each eye for each
sample for the entire recording. since our Tobii Pro Glasses 3 comes in 50 Hz and 100 Hz sampling rates, that means you get a row of data every 20 or 10
milliseconds respectively, so these can be quite long and large data sets. this is where you'll also find the pupil diameter estimates for the left and right eyes, for analysis in your favorite
analytics software you'll find the pupil diameters here I'll show you what this data export looks like as well great
this is our Excel report on the bottom tabs you'll find each metric I've clicked on interval duration because we were interested in the interval between the start of the trial
and the decision you can find all that data here trial one interval for each recording so it shows the total count as well as the averages
now in this case recordings one through five found the error and recordings 6 through 10 did not find the error so you could do some analysis to see if
there are group differences you can also look at data for total fixation duration and this gives you all of the attention to each aoi separately for each
recording as well as averages. okay, this is the AOI-based TSV file; it's set up a little bit differently than the Excel file, because this is easier to import into your favorite analytics
software like SPSS. you see all the recording information is separated by row, also the aois, and then at the top you find all the metrics
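because the TSV has one row per recording and AOI, splitting it into the two groups described earlier (recordings 1 through 5 found the error, 6 through 10 did not) is straightforward outside of SPSS too. a minimal sketch with made-up values and hypothetical column names; a real export has more columns and different headers:

```python
import csv
import io
import statistics

# Made-up AOI-based TSV excerpt; real exports have more columns and
# different header names.
tsv = """Recording\tAOI\tTotal_fixation_duration
Rec1\tName\t3.2
Rec2\tName\t2.9
Rec6\tName\t1.1
Rec7\tName\t0.8
"""

rows = list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))

# Recordings 1-5 found the error; recordings 6-10 did not.
found = [float(r["Total_fixation_duration"]) for r in rows
         if int(r["Recording"].lstrip("Rec")) <= 5]
missed = [float(r["Total_fixation_duration"]) for r in rows
          if int(r["Recording"].lstrip("Rec")) > 5]

print(statistics.mean(found), statistics.mean(missed))
```

from here a significance test on the two groups is a one-line call in any stats package.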
finally, this is the data export, where we see all of the gaze data throughout the entire recording. this is a very long file
with thousands of rows. you see here in column C, this is the sensor: so you see gyroscope, accelerometer, as well as eye tracker readings.
you will see these two sensors alongside your eye tracker data, so if you see blank values, that's probably why. this is what the data will look like, and here are the pupil diameter columns
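with a row every 20 ms at 50 Hz (or 10 ms at 100 Hz), offline pupil analyses such as change from stimulus onset are simple to compute from this export. a minimal sketch with hypothetical pupil samples; real exports use different column layouts and units:

```python
# Hypothetical pupil-diameter samples (mm) from a 50 Hz recording,
# i.e. one sample every 1/50 s = 20 ms. Not values from a real export.
sample_interval_ms = 1000 / 50          # 20.0 ms per row at 50 Hz
pupil_mm = [3.10, 3.08, 3.12, 3.30, 3.42, 3.51]

# Baseline-correct against the mean of the pre-stimulus samples
# (here, the first three samples are assumed to precede onset).
baseline = sum(pupil_mm[:3]) / 3
change = [round(p - baseline, 3) for p in pupil_mm]

print(sample_interval_ms, change)
```

in a real analysis the baseline window would be chosen from the logged stimulus-onset event rather than hard-coded.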
and that's it: that's how simple it is to get started in Tobii Pro Lab, to import, inspect, and analyze your wearable eye tracking data from the Tobii Pro Glasses 3.
we envisioned Tobii Pro Lab as productivity software: there's a job to be done, and here are the tools to make it happen. we hope this easy-to-use software helps solve your research problems. obviously there's always time needed
to get accustomed to using any new software; with this platform we hope there's a quick ramp-up from learning how to use it to actually collecting data and moving your research forward. I want to thank again our awesome
partners at Research Collective for this great mock data set. they said that it is available to share, so if anyone is interested in playing around with this data a little bit more, just send us an email and we can get it to you
I also did want to share some helpful links we have here on the slide, and showcase a very special event coming up: from June 1st through 3rd we're
hosting an in-person three-day course called a practical introduction to eye tracking. it will be at Lund University in Sweden. it is in person; there will be a
classroom full of our Tobii Pro Spectrum eye trackers and lots of demonstrations and opportunities to try both the Spectrums and our Tobii Pro Glasses 3. you can find the link here to
register for this course, so we hope, if you're available, that you can check it out. another of the few links we have up here: if you want to try Tobii Pro Lab for
yourself, you can download a demo for 30 days absolutely for free. you can click on this link, click on sign up for a free trial, and get a free demo license. we also have demo projects available, and like I said, if you contact
us we can send you this one as well, so you can test it out and try some of the techniques that we discussed today with automatic mapping and analyzing all the data
if you are working on a grant proposal to acquire funding to buy a Tobii Pro system, we want to help. this is a complimentary service we offer called
Tobii Pro funding support services, so if you want to get in touch with us about supporting your grant proposal to help you be more competitive,
you can contact us at tobiipro.com/fss. if you are an existing customer and you have questions about your Tobii Pro system, you can always reach out to our customer care team at
connect.tobiipro.com.
and if you are a prospective customer, or don't have a system yet but want to talk to someone from our sales team to get some pricing or some other specifications, or answer questions that
we didn't get to today, you can contact our sales team at tobiipro.com/contact. and finally, if you want to watch more webinars like this, including the one we
hosted last week with screen based eye tracking with static stimuli the on-demand webinar Hub is listed there too so before we get started with your
questions we have a couple of quick poll questions for you to get to know our audience a little bit so let me get those going the first question is have you used wearable eye tracking in your
research before a couple different answers yes you're using it currently no you haven't used wearable eye tracking but you have used other types of eye
tracking, and then no, not yet, to both of those. looks like we have quite a lot of people who are potentially using it right now
just give another second. okay, great, let me show you the results to that. looks like actually most people here are
currently using wearable eye tracking, so that's really awesome. some have used other types of eye tracking, and some of you have not used any eye tracking yet, but we hope to be able to help you
in the future if it's on your to-do list. so let's look at the other question: we want to know what topics you would like to
learn more about. this is an ongoing webinar series; last week, again, we talked about screen-based eye tracking and static stimuli, and today we're talking about wearable eye tracking, but there are lots
of other things that we could show you. are you interested in the stimulus presentation and design tools we have in Tobii Pro Lab? there are lots of different
tools we have to show: web stimuli, video stimuli, we could do screen recordings, we can work with E-Prime, more tips about calibration and recording,
including in Glasses 3, which I know we didn't show today, or more data analysis. so let us know which topics you want to learn about next, and you can check as many as you want
okay, looks like we got a lot of answers here, so let's share. looks like many of you want to learn more about stimulus presentation, some people about calibration, but a lot of people want to
learn more about data analysis, so we will definitely keep that in mind as we're planning for the future, and we hope that you come back and join us again for our next webinars. especially if it's about data analysis, we'll
keep you in the loop so now I want to invite my colleagues Carson and Tamar to come back so we can start with your questions we have had
some good questions so far in the chat you can keep asking them as well we'll get to them as they come in for now I did want to ask
someone wondered: how is data collected, what is the software used, and what things are possible? I did not showcase the Tobii Pro Glasses 3
controller, but that's the software that you use to do recordings with the Glasses 3. you can watch live as people are going
through their task; you can watch exactly where they're looking in real time. so the Tobii Pro Glasses 3 controller is the software that we used to do these recordings
here's a good question: what value would you recommend for the similarity threshold when doing the automatic mapping? I don't know if one of you wants to give some advice on that.
yes, for the values there, the choice would of course depend a bit on what you're doing,
but generally speaking we would recommend around 70, so 0.7 or 70, for the similarity value. most of the points that you
will see with a value higher than 70 will actually be accurately mapped, so that would be our recommendation for that value. okay, we had a question here,
a very interesting one: they had a question about date and time in the recording overview. they are anonymizing the recordings with random recording names so
that students don't recognize recordings. considering GDPR rules, is there any way that the time and date can be deleted in the overview, or masked somehow,
or what would you recommend? so they cannot be deleted from the overview directly; once you have imported those recordings, they will show there, so
there's no possibility to change it. but one thing that is always possible, at your own risk of course: the recordings you have from the Glasses 3 contain a metadata file
called recording.g3 inside the recording's folder, and inside there are the actual dates that we use for this field. so you could also change those dates, say to be the same date for all
the recordings, if you wanted to. of course, be careful with that: always back up those recordings, don't just edit the original ones; make a copy of them, try to edit them, import, and see if it
works well. but that would be one way if you really want to get rid of the dates from the data. it will still show in Tobii Pro Lab once you import it afterwards, but then you can have, you
know, a random date like the first of January 2021 or whatever it would be. and perhaps one add-on here: I think it would be good to connect
to our support staff before doing this kind of manual operation on the metadata; it would always be good to check with them first. and, to
the submitter, if we could check in after the meeting on your GDPR questions, I would be happy to learn more about them, so please write in the
chat if you would be available for that. awesome, thank you both. yeah, Marianne, if you have follow-up questions, please feel free to get in contact; we can get the
question to Carson and Taman help you get get this sorted great another question from Tim is there more specific information available on
how the computer vision algorithm for the assisted mapping tool processes the scene? I'd love to hear more about how that works. Yes, that's a question we often get. The algorithm per se is not described or published anywhere, so we don't have extensive documentation around it, but we can make general comments about what it does. It is a computer vision algorithm that works with local features in the scene camera video of the Glasses 3 recording and tries to match them to local features in the snapshot you are using. Lots of things go into consideration, of course: colors, edges, specific shapes. But we don't have a more extensive piece of documentation we can refer you to, so it's a fairly generic explanation of what it does. I hope that answers the question a bit, at least. Well, I guess we're limited in what we can share about the algorithm. Great, I hope you understand that,
Tim um okay let's see great question here um how much time does it take for the
assisted mapping procedure to work so for example I think I highlighted a about a 40 seconds worth of this recording um and the question is how long does it
take to map that in the background I've heard some numbers thrown out that I'd love to hear from you too maybe I can answer that one too yeah um so typically of course it will depend
on the machine you're using and you you might have a very slow computer you might have a very fast computer but say on the on a recommended machine we see that around the second of recording
would take about a second to map so if you had uh you know more recent machines say from the past five years uh rather like a medium to high range
computer than about one second of recording for the one second of mapping hmm yeah and I think if you if you check the system recommendations on the Toby
Pro product page for Tobii Pro Lab, we have them linked there if you want to double-check, and of course you can always contact our support portal with any questions for additional information on that. Great. Another question: is it possible to download all the raw data at once? That wasn't possible a year ago. Talk about, I guess, some of the data output in a little more detail.
I don't think we touched on that last time. I mean, it is possible to download or export processed and raw eye tracking data, so we need to zoom in a little on that question: what was the blocker, I think. Otherwise, Tomas, do you have anything? If you could elaborate on exactly what you mean by raw data, because in the data export that Marissa showed, you can export all of the data for all the recordings, and that's all the raw eye tracking data you get from Tobii Pro Glasses 3. So it would be great if you could tell us what you mean by raw data: is it the videos, or is it more the actual eye tracking data, the numbers? It should have been possible to do that one year ago as well. But yeah, let's circle back on that one. Right, so Clement, if you have a clarification for that question, feel free to let us know. Otherwise, if you're talking about the eye tracking data, it is in the data export. If you're looking for the time series data, that's the very last thing I showed: exporting all of the XYZ data, which shows up as a big spreadsheet where you can see all of the gaze direction and fixation points for both eyes separately. A question about the 3D gaze vector data
that the glasses 3 provides is it possible to visualize those in prolab no no um it would be interesting to know what
you're trying to do and by visualizing those they are there are some solution uh I don't know if you have integration yet with the class history but we have we had
integration with glasses too before with the imotion capture with the um so you could see in in 3D the scene with the the head of the participant and
where they're looking with the Gaze Vector but that that's that's a different software than prolab yeah okay um
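For anyone working with the raw data export mentioned a moment ago: once exported, the file is plain delimited text and easy to post-process outside Pro Lab. A small sketch, with a hypothetical column name ("Gaze direction left X" is an assumption, not a confirmed Pro Lab header); match the names to whatever your own export actually contains:

```python
import csv
import io

# Hypothetical three-row excerpt of a tab-separated gaze data export;
# real exports have many more columns, with the header names Pro Lab writes.
sample_tsv = (
    "Recording timestamp\tGaze direction left X\tGaze direction left Y\tGaze direction left Z\n"
    "0\t0.10\t-0.05\t0.99\n"
    "10\t\t\t\n"  # empty cells: a sample where gaze was lost
    "20\t0.12\t-0.04\t0.99\n"
)

def valid_sample_ratio(tsv_text, column="Gaze direction left X"):
    """Fraction of export rows where the given gaze column is non-empty,
    a rough proxy for data quality across a recording."""
    rows = list(csv.DictReader(io.StringIO(tsv_text), delimiter="\t"))
    valid = sum(1 for r in rows if r[column].strip())
    return valid / len(rows)

ratio = valid_sample_ratio(sample_tsv)
print(f"{ratio:.2f}")  # 0.67
```

The same pattern extends to any per-sample metric: filter rows, group by recording, and aggregate.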
Okay, we got a follow-up from Clement. It's about the videos and recordings: they tested the software a year ago and had to download each recording by itself, which was time-consuming. So I guess that means the recording with the gaze overlay. Yeah, it's still the same; we haven't changed that, so you would still have to select individual recordings to export the video with the gaze overlay. Yes, but we added something in the Glasses 3 Controller: a video mode. So if you're only interested in the overlay, there's a new mode you can choose when you're starting Glasses 3 in the Controller, something like a video mode with a burned-in gaze point, which you can download directly from that application. So you don't have to go into Pro Lab for this, only for further analysis and for chopping up your project and recordings data. And if you do that, those recordings are not compatible with Pro Lab, is that right? That's correct, yes. Okay.
Okay, so it sounds like there is a way to more easily download those recordings with the gaze overlay by selecting that option in the Glasses 3 Controller, but then those recordings would not be compatible with Tobii Pro Lab, so if you're an existing customer, just keep that in mind. There is still a way, obviously, to highlight a section of the recording and export it with the gaze overlay from Pro Lab, but as you were mentioning, if you're doing that for all the recordings, it is going to be a manual process. So yeah, if you have more questions, feel free to reach out to us or contact the customer care team, and maybe they can help you out. Thank you so much.
Okay: how would you suggest analyzing dynamic stimuli in something like a mobile app? I mean, it depends. What's good to consider is what kind of restraint you want in your design: do you want participants free to move, or do you have more of a lab setting, with a participant coming into a lab? If you have a participant coming into the lab, we have solutions inside Pro Lab that don't use the glasses at all, what we call a scene camera setup. You could record the screen of a mobile phone, get that into Pro Lab with the gaze data on top of it, and analyze it directly there, which would be easier than using the glasses. If, on the other hand, you're aiming for a more natural setting, where your participant is maybe in a context with other screens around, it will depend on what exactly you're interested in. If you're interested in interaction not only on the mobile phone but beyond it, we would recommend the glasses, for multi-screen scenarios for example. So it will depend a little, but both ways are possible. It's always good to avoid using glasses if you can, as they make the data analysis more complicated, so try to be as restrictive as possible. Even having a mobile app tested on a laptop screen with a screen-based eye tracker is an option; of course you won't have the natural interaction of a touch screen, but it's a possibility. So you need to see exactly what you're interested in in your design, build an experiment so you can answer your question, and then trade off how natural it will be for the participants.
Yeah, that's a great answer. I have done some quick demos of looking at my phone while wearing the glasses and walking around a space. If you take a snapshot of your phone screen (I have an iPhone), the software can do the mapping procedure to a snapshot of the background. If somebody is using an app, a page might look different here or there, so you're going to find some times where the mapping doesn't translate to the snapshot, but you could use a lot of different snapshots. So if you really wanted the participant in a natural environment using their mobile device, you can take snapshots and do what we did today. But I think Tomas's points about controlling the situation are really important: are you really looking at the UI or UX design aspect, or at how the person uses this as they're going about the rest of their life? Those are really important points to consider. Awesome. Okay, we got a follow-up question about the 3D
gaze vectors from Tim. The question relates to the 3D gaze vectors: because the integration with the glasses is not out yet, Tim would like to assess how accurate the 3D gaze vector is; he tried to understand the data export from Pro Lab, but it is in the HUCS coordinate system. Yeah, so tell us a little more about that. So, I'm surprised; there's a small drawing in the Tobii Pro Lab user manual actually describing what HUCS is. It's the head unit coordinate system, that's the acronym, and it's basically a coordinate system fixed to the head unit of the glasses, the actual glasses themselves, centered on the scene camera, the camera you have in the front of the frame. So you get XYZ coordinates for the different directions: I think X points to the left of the glasses, Y goes downwards, and Z goes forward. So that's the actual coordinate system.
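Given that axis description (X left, Y down, Z forward from the scene camera), a HUCS gaze direction vector can be converted to horizontal and vertical gaze angles. A sketch under those assumptions; verify the exact sign conventions against the coordinate-system appendix of the user manual:

```python
import math

def gaze_vector_to_angles(x, y, z):
    """Convert a HUCS gaze direction vector to (horizontal, vertical) angles
    in degrees, assuming X points left, Y points down, Z points forward.

    Under these assumed conventions, positive horizontal means gaze toward
    the wearer's left, and positive vertical means gaze downward.
    """
    horizontal = math.degrees(math.atan2(x, z))
    vertical = math.degrees(math.atan2(y, z))
    return horizontal, vertical

h, v = gaze_vector_to_angles(0.0, 0.0, 1.0)  # looking straight ahead
print(h, v)  # 0.0 0.0
```

Angles like these are often easier to reason about than raw direction components when judging gaze accuracy.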
So yeah. Okay, I see in your answer you mention the UCS; I think that would be the same coordinate system, if I'm not wrong, but I would need to check exactly. So that was the documentation from the glasses, and that's in the appendix, right? Yes. So for anybody else who's interested in the head unit coordinate system, that information is in the appendix of the Tobii Pro Lab user manual. You can find it on the Tobii Pro Lab page, or if you Google "Tobii Pro Lab user manual", which I do all the time, it'll take you right to it. Look in the appendix, or rather the appendices, because there are several different coordinate systems explained for the data. Awesome. Okay, I have a question from Juke Grin:
can you share a list of literature related to eye tracking data analysis? So, we do have a Zotero database; I think we're working on updating our database of literature, and I can send you the one we have. Is there something more updated on our website that either of you know about right now? No, it's in flux, so there are no updates. I think the course in Lund that you mentioned would be perfect for data analysis, so if you have time and can travel there, sign up; that's cool. Otherwise, I'm going to email you that Zotero link, or whatever I have available, after today. Cool. We've been in touch; she was actually at
the webinar I hosted in March, in person at the HFES conference, where we talked about this data set. Okay: are dynamic snapshots absolutely necessary, and better than static photos of the targeted objects, in assisted mapping? So I guess this is comparing the assisted mapping procedure to dynamic AOIs. Right, I guess there was a little bit of confusion there. The snapshots that you add are always static images, and they are interesting in all the cases where you have a stimulus or scene that your participant is exposed to that is the same across all your participants and recordings. Then it's very easy to aggregate on it, because you can have one image of it, like in the case that Marissa was showing. But in some cases you might not have that; maybe you have a very dynamic scenario. Think, for example, of someone playing golf, where you're interested in whether they're following the golf ball with their eyes or not.
In that case, a snapshot scenario like that will not work well with assisted mapping, or any kind of mapping, so you might want to use moving AOIs instead. That's a bit more tedious, because you have to go recording by recording and do the analysis frame by frame, moving the area of interest frame by frame, so it can take some time. So it varies based on the design of your experiment: the more dynamic the content, the less successful assisted mapping will be, I guess. Right, we can say that. Exactly, but you always have the manual fallback, where you can basically map manually, so that's always a good option. Right, but no, it's not necessary to do both; I was just trying to show very briefly how you could do it. I think we could spend an entire webinar just going into the specifics of dynamic AOI design, and probably snapshot mapping too, but they are two separate things for two separate purposes. So it's definitely not necessary to do the dynamic AOI analysis if you have snapshots; we highly recommend using snapshots, and for those things that can't be snapshotted, the dynamic option is there.
Awesome. I think I got all the questions currently in the chat box; if anybody has any last-minute questions, please feel free to throw them in. I also wanted to quickly say it did seem like a couple of people had some technical issues at the beginning, but if any of you are watching this on the replay, we are going to be hosting another few sessions with Carson and Tomas. I think that being able to ask questions directly of the people who are designing the software is super cool, especially if you're already an existing customer, so having this platform is great, and we so appreciate both of you being here. So if you're watching this as a replay and want to join again in the future and ask them some questions, please feel free to do so. We have a couple more dates, I think June 8th and June 15th, and the
sign-ups for those webinars will be on the events page of our website. Okay, Tim has another question: he wants to use the eye tracking data in a human-robot interaction. Okay, great. I just heard the question. So, we don't have support for something like the Robot Operating System, but you could use the API that comes with the glasses. It's an API, so as long as you have a language that supports HTTP and the rest, you will be able to work with it, request the data directly from the glasses, and stream it to your robot so you can make decisions there if you want. That's definitely something you can do. I think the challenge will mostly be how to interpret the data, but I leave that up to you. So that's an interesting but tricky thing to do.
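For the robot side of a setup like Tim's, once gaze samples are being streamed from the glasses, the decision logic could be something like a dwell trigger: act when the wearer has looked at a region for long enough. This sketch stubs out the transport entirely and uses a hypothetical sample format, not the actual Glasses 3 API schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GazeSample:
    timestamp: float               # seconds since stream start
    gaze2d: Optional[tuple]        # normalized (x, y) on the scene camera, or None if lost

def dwell_trigger(samples, region, min_dwell=0.5):
    """Return True once gaze stays inside `region` (x0, y0, x1, y1) for at
    least `min_dwell` seconds, e.g. to tell a robot "the user is looking at
    me, respond". In a real setup, samples would arrive from the glasses'
    streaming API rather than as a pre-built list.
    """
    x0, y0, x1, y1 = region
    dwell_start = None
    for s in samples:
        inside = s.gaze2d is not None and x0 <= s.gaze2d[0] <= x1 and y0 <= s.gaze2d[1] <= y1
        if inside:
            if dwell_start is None:
                dwell_start = s.timestamp       # dwell begins
            elif s.timestamp - dwell_start >= min_dwell:
                return True                     # dwell long enough: fire
        else:
            dwell_start = None                  # gaze left the region: reset
    return False

# Synthetic 100 Hz stream: 0.6 s of gaze held in the centre of the view
stream = [GazeSample(t / 100, (0.5, 0.5)) for t in range(61)]
print(dwell_trigger(stream, region=(0.4, 0.4, 0.6, 0.6)))  # True
```

The interpretation question raised above (what a fixation on the robot actually means for the interaction) still has to be answered in the study design; this only handles the mechanics.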
But I think Tim has a somewhat more complex use case here, so if you sum it up for us and put it in the support portal, we'll try to catch up on it and see how we can get to a solution together. There was an API webinar earlier this year: when we released Glasses 3, there was a webinar about the API. I think we have it on our website somewhere; I'm not sure how easy it is to find, but you should be able to, and we can link to it as well, Tim. If you Google "Tobii Pro Glasses 3 API", the webinar is at the bottom of that page.
I Google things all the time; I mean, I'm sure I could navigate our own page, but Google is so helpful. Okay, last question: is there an eye image recording function planned for Glasses 3, in the Glasses 3 Controller? Can we answer that? I think it's under discussion, I would say. I don't know if it's in the plan, but it's being discussed, so I guess you will get more information regarding that in the near future, I hope. There's not much we can share at the moment. Yeah, we're definitely aware of it.
But yeah, keep in touch. If you are an existing customer with Glasses 2 and waiting to upgrade, please keep in touch with us, because I think that's important. Also, if you'd like to tell us what you would use it for, that would be interesting: what insights would you like to get from that? Yes, R&D teams really do like to hear customer use cases when they're deciding how to move forward in planning, so whatever details you can provide, send them here, or send us a message afterwards with a little more detail on what you would use the Glasses 3 eye images for.
So yeah, let us know and keep in touch. If there are no other questions at this point, we will wrap up and say thank you so much to Carson and Tomas; thank you both for being here today. Again, if you are an existing customer and have questions, contact us through the Connect portal at connect.tobiipro.com; if you're a prospective customer, go to tobiipro.com and contact us, and we would love to chat with you more. Hope you have a great rest of your day. Thanks, everyone.