
UXA2023 Amelia Purvis - Your new cheat sheet for note taking and synthesis


As a UXR I spend a lot of time note-taking and would love to share a note-taking strategy that saves my team and me a HEAP of time (and money). The new strategy is visual (which clients love), quick to synthesise (which researchers love), and easily translates findings into design recommendations (which designers love). You can thank me later.

uxaustralia | August 25, 2023

Transcript

Note that this is an unedited transcript of a live event and therefore may contain errors. This transcript is the
joint property of CaptionsLIVE and the authorised party responsible for payment and may not be copied or
used by any other party without authorisation.
www.captionslive.au | [email protected] | 0447 904 255
UX Australia 2023
Friday, 25 August 2023
Captioned by: Kasey Allen & Bernadette McGoldrick

STEVE BATY: Our next talk is Amelia, who joins us from Adelaide, which is not common. We
    don't get a lot of speakers coming here from Adelaide. But Amelia joins us
    from Adelaide with the Symplicit team. Please join me in welcoming her
    to the stage. Thank you. (APPLAUSE)
    AMELIA PURVIS: Good morning, all. Lovely to be here. Thank you for
    having me. Like Steve said, my name is Amelia. I am a director and a
    psychologist. We've heard a bit about the psychologists this morning from
    Zoe. You'll notice I'm not middle-aged and I'm not a man, so we've come
a long way. (LAUGHTER) I have a very short attention span. I get bored
    very easily, so I'm gonna give you the ending up-front. If you're gonna
    write anything down today, these two things: If it's not working, fix it.
    And tell the story. So, part of what we do - and what I really enjoy about
    my job and my work, and I hope you guys are all the same - is that we
    see things that are wrong and we say, "We can make this better. We can
    make this more efficient. We can improve this. How can we do it?" And
    the second is to tell the story. We are storytellers. We collect research
    from people and we're able to explain this research to other people and
    say, "Look what we found out." Write those two things down - that's what
    you're gonna take away from today.
You'll notice about me, I get bored easily and I am basically a child
masquerading in an adult's body. So, we will start at the very beginning, as
    Maria tells us. You'll see a couple of these throughout the slides. Where
    do we begin? So, when we're talking about note-taking, the systems that
    I have seen can be very slow and awkward and cumbersome. I don't
    know how you note-take at the moment - whether a Word document or
    Excel, like this. What can happen, though, is they're very clumsy and
    awkward. You can see participant one down to participant 15. So, we
    have notes for each participant on a different tab. We also have each of
    the pages that we're interested in looking at - so, this is for usability
    testing - each of the pages that we're interested in looking at, we've got
    up the top. And then our questions in bold underneath them. And then
    below those you'll see the notes that the team took throughout our
    usability testing. My issue with this is you can't compare across
    participants, you can't easily look at the data and say, "This note relates
    specifically to this element in the page," and it's awkward and clumsy and
    I don't like it. So, what we did, what we were looking at is thinking with
    the end in mind. Quite often, what we see is when we deliver back to our
clients, they've asked us a question and they want to know, of this slide,
    of this page in my website, what's working and what's not working? And
    we will give them a key code and we'll say, "This element is green, it's
    working. This element is red, it's not working." We are so far from that
here, right? Like, we're starting so far back that we then have to
transfer this into a Miro board and synth it and analyse it, and
then make so many steps to get to that end point. It wasn't
working for me. It didn't quite click. I hope you've all seen this slide - this
    is one of my favourites. Tell me why? Why do we note-take? We have a
client who's asked us a question, particularly for usability testing. They
want to know something, they want an answer. Do we have to make
changes? Do we have to improve
    things? "Is it really great and we can just keep it as it is?" We need to ask
    ourselves this, "Why do we use the templates that we have?" Is it
    because that's just what's always existed? We know that there's huge
    danger in that, we can't use what's always existed. We need to question
    ourselves and ask why we use the things that we do. Alright, go here. So,
    what I was thinking with the team, what was working and what was not
    working - let's start at the end. So, ultimately, we are going to do some
    usability testing, we are going to give our client the results. Why don't we
    start here? Instead of working in an Excel document, instead of working
    in a Word document, we're going to start in Miro, we're going to put our
    pages we're interested in, chuck them straight into Miro and note-take
    directly into Miro. That's gonna work and it will be great. It did not work
    and it was not great! (LAUGHTER) So, this was about a year and a half
    ago, and a couple of the team who were on this project, we had a big
    debrief after this first session. I said, "It will be great, it will be fun." It
    was super confusing. The team had to note-take live during the usability
    testing session. Comments were everywhere. It was all sorts of
not-working. What do we do? We're HCD practitioners, we iterate, right?
Everything gets a box. But everything in a box - this is what the client's
interested in; tell them the answer straight up, up-front. The client wants
to know what people's comments are relating to that specific element. We're
    gonna tell you. We're gonna tag these comments also. So, we're gonna
    tag them by participant, so you've got participant one, participant two,
    participant three, in your different tagging sessions. By the end of three
    participants, we can see where things are starting to cluster. We can
    see - I was about to swear then - I won't! We can see where these
    clusters are and we can see what's actually happening in these groups.
    This tells us what's good and bad and where opportunities are. I'm
    simple. My good and bad opportunities are red and blue. We can
    colour-code these, live or straight after the usability testing session.
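
(A minimal sketch of how the tagged, colour-coded notes described above
could be modelled outside Miro, assuming hypothetical names - Note,
cluster_by_element - rather than anything from the talk or from Miro itself:)

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical model of one raw note: who said it, which boxed page
# element it relates to, a simple rating code, and the note text itself.
@dataclass
class Note:
    participant: str   # e.g. "P1"
    element: str       # the boxed element on the page, e.g. "Download button"
    rating: str        # "good" | "bad" | "opportunity"
    text: str          # the raw observation or quote

def cluster_by_element(notes: list[Note]) -> dict[str, list[Note]]:
    """Group notes by page element, so clusters start to show up
    after roughly three participants."""
    clusters: dict[str, list[Note]] = defaultdict(list)
    for note in notes:
        clusters[note.element].append(note)
    return clusters

notes = [
    Note("P1", "Download button", "good", "Clicked it straight away"),
    Note("P2", "Download button", "good", "'Download' obviously saves the form"),
    Note("P3", "Nav menu", "bad", "Couldn't find the forms section"),
]

for element, group in cluster_by_element(notes).items():
    participants = {n.participant for n in group}
    print(f"{element}: {len(group)} notes from {len(participants)} participants")
```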
    There's always a question that we need to answer. Pop the question in
    the box as well. So, you've done three usability testing sessions this
    morning, they've taken three hours, so maybe that's the morning's work.
    The client comes in, or a stakeholder comes in and they go, "How has the
    morning gone?" And you've gone, "Great. We're seeing a couple of little
    issues coming up here and opportunities coming up here. We've built the
    picture already." We don't have to spend huge amounts of time synthing.
    I'm not saying this is where you stop, because it's not, this is not
    storytelling, this is findings, these are raw findings. But we can see
    straightaway. To build on this - at the moment, you've got individual
    cards that are raw notes from your usability testing. These are findings
cards. I found that each time that we were doing usability testing, we
    would end up thinking about, and talking about, our findings in this same
    way. And so what I did was I popped it in a findings card. What's the
    finding? Users understood that "download" meant they could download
    and save/print/share the form. What's the observation? This is our
evidence. How do we know this to be true? In this case, we've used direct
    quotes from usability testing. We've got a couple of different options here.
    So, I've got three, four different participants here, so we can see a
    variety. Was it just one participant who really hated that or are we seeing
    a pattern starting to emerge here? Sometimes our evidence might be our
    reviews and evaluations. We might also draw evidence from other sources
    to verify our finding. And what's our design recommendation? What's the
    action that you need to take as a result of this? I've told you what the
    issue or the opportunity is. You can see it's green, so this is something
that's working well. I've told you why and what evidence is supporting
that, and this is what you can do moving forward.
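
(The findings card lends itself to a simple structure. Another minimal
sketch, with hypothetical names rather than anything shown on the slides:
a finding, the observations or quotes that evidence it, a colour-coded
rating, and the design recommendation that follows from it.)

```python
from dataclasses import dataclass, field

# Hypothetical shape of one findings card, mirroring the parts described
# above: finding, observation/evidence, rating, design recommendation.
@dataclass
class FindingsCard:
    finding: str                  # e.g. 'Users understood that "download"...'
    evidence: list[str] = field(default_factory=list)  # quotes, reviews, other sources
    rating: str = "good"          # drives the colour code (green = working well)
    recommendation: str = ""      # the action to take as a result

card = FindingsCard(
    finding='Users understood that "download" meant they could download '
            'and save/print/share the form.',
    evidence=['P2: "I would hit download to keep a copy."'],  # illustrative quote
    rating="good",
    recommendation="Keep the existing 'Download' label as-is.",
)
print(card.finding)
```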
    So, all of a sudden, by the end of the day, you now have all of your
    raw notes and your findings card in this format, ready to show the client
    or your stakeholders. I can see the phones out - we like this slide! It gets
    better. We can colour-code our findings cards. Again, keep your raw
    notes, by all means, OK? This is data integrity. Keep your raw notes. I
    always do this in Miro. Duplicate your board, keep your raw notes, and
    then have these as your findings cards. Remember, your findings cards
    have your quotes and evidence, so this is really good. You can scale it as
well. So, we have had pages and pages of these. It's a really easy,
scalable system that actually works really, really nicely.
    Second point - it's not just about the findings. Our responsibility is
    to take these findings, what we hear directly from our participants, and
    form them into a narrative that makes sense. You'll have these
    opportunities and these things that aren't working, but what's the story
    here? How do you piece that together? Take your findings cards and start
    playing around with them. This is where you can get your red string out
    and start connecting the dots. Maybe there's something going on here.
    Maybe there's a play in here, using your findings cards. And then this is
your narrative. Where you started is gonna be completely different to
    where you ended. What the client asked you and what the problem is are
    probably gonna be two very different things. This is why we get paid the
    big bucks! This is what our responsibility is - and we have been talking a
    lot about ethics this morning. We have a responsibility to participants who
    we have invited in to be part of our research, that we tell the story. What
    they tell us and what the bigger story is, can sometimes be very different.
    We've asked them very specific questions. They're responding to our
    questions. We need to be the ones to piece together the pieces of
    information and present the story, the narrative, back to the client.
    Show me it in practice, Amelia, I hear you asking! OK, I will! So,
    this is the team, a beautiful team up in Brisbane. They did this project a
    little while ago. I won't go into too much detail about it because of client
    confidentiality, but we have been approved to share a couple of key
    pieces of this project with you all today.
    They are very clever and they used a ratings code that was much
more sophisticated than my good, bad and opportunity. So, you'll notice
    that their findings cards are colour-coded by these. I'll leave that up there
    for you to take photos. It's a good one.
So, this is their page, right? Everything gets a box. These are the areas
that the client is interested in. You can note-take directly in. We can see
    the tags for each of our participants - they're coded in. Then ta-da! Raw,
    client presentation. Raw. Client presentation. (LAUGHTER) Really nice,
streamlined approach here. You can see how, visually, we haven't had to
make so many leaps from where we started with a Word doc or an Excel
doc, where we'd synth it and go around and around and around. We can
see this coming out very clearly. And again, these are the
    key things that we're interested in, this is what the client's asked
    for - what's working, what's not working, who said what in terms of the
participants that we're hearing? Client presentation.
Favourite quote from The Blind Side - you're welcome. This
    framework that we're talking about, if you'd like to take it into your
organisation and use it, please do so. You can call it Synth it Directly, or
SiD, as cleverly coined by a friend of mine. What I would enjoy even
more is for you to take it into your organisation and play with it - it might
work for you or it might not. It's an invitation to break it apart, to see
whether it works or whether it doesn't, to grow and evolve it. I am
offering you an option that you can take, as opposed to maybe a
Word doc or an Excel doc. Try this - maybe it works, maybe it doesn't. Let
    me know and we can grow and learn together. That's it. I said I was
    gonna keep it short and shiny. That's what I wanted to share with you
today. Please ask away with any questions. Otherwise, it all makes sense and
    I'll leave you to it. (APPLAUSE)
    STEVE BATY: Questions for Amelia? Kit.
    >> Thank you very much. Very interesting. I'm really interested in the
    part where you were talking a little bit about the client presentation story.
    So, how much do you show in place, in a Miro document or something
    else, in a client context, and how much do you deliver as an asset that
    the client can hold on to?
AMELIA PURVIS: So, what we deliver in the presentation, that's what
    we deliver to them as an asset. For this project, in particular, this is what
    was delivered because this is clean. So, we've also gotta be careful of,
    depending on what you've agreed on with your participants, what you can
    and can't share. So, do you need to anonymise data? You need to be
really careful around what your consent form is when you've actually
interviewed them. In terms of what we're finally handing over to clients,
if they've got this, they don't need the raw data - in my opinion. In the
projects that I have been running with this approach, I've never given
them the raw data. They've been very happy with this. This
    gives them the story that they need, whilst keeping that client - that
    participant confidentiality. Did that answer your question?
    >> Yeah, I think it does. You deliver this in Miro or you deliver this...?
    AMELIA PURVIS: We will export this slide. So, the girls, what they did was
    they exported that. This, I've cropped from their PowerPoint presentation.
Yep, obviously you're gonna ask me about the legibility - these cards, in
terms of keeping confidentiality for our client, I've scrubbed these slides.
The actual cards are bigger and legible for the clients to engage with.
Yep.
    STEVE BATY: Just here.
    >> I have a question about when you do usability testing and the number
    of participants is not very high - let's say six - and you have six design
alternatives that you're testing, and none of the participants agree on one
    direction, then how do you synthesise that? How do you go about it?
    AMELIA PURVIS: So, you've got six participants and you've got six
    different options that they're testing, and the participants don't agree on
    any?
    >> Like, they have different preferences of what works and what doesn't
work.
    AMELIA PURVIS: So, they have different preferences on what works and
    what doesn't work - is this objective or subjective? Are we looking at the
    action that they're completing or are we looking at what they're saying?
    >> Both.
    AMELIA PURVIS: Both. So, the question is - what do you do when your
    data from participants doesn't give you a story?
    >> I follow the same style of tagging and then collecting, like, relevant
    feedback on specific design elements, and then sometimes it's hard to
    come up with a story because you can't connect the dots. Does that make
    sense?
    AMELIA PURVIS: That's the exciting place. Yeah. So, I would be looking at
    both your objective data in terms of what did we actually see from
    participants? Because what they tell us and what they do are two
    completely different things. We know, "Did you eat all your fruit and
    vegies today?" "Yeah, of course I did." "Study?" No, they didn't. The
    subjective data is in response to what we've asked them, which can be
    very anchored and biased, so we need to be really careful around that.
    Also, we have those beautiful heuristics, so we know a lot about human
    behaviour already that we can take from that. So, if you were testing
    across six and putting forward a story to the client, I would be relying on
    the objective data, I would be relying on other evidence that you can find
    around human behaviour, and I would be supplementing it with the
    subjective evidence from the testing. Yeah. It's a juicy one, though.
That's fun. That's a really interesting one.
    STEVE BATY: Question here?
    >> Thank you. I wanted to ask, how do you combine this method of
note-taking and synthesising findings with all the other types of
information that can come from usability testing? For example, if you run
it all in Maze and you get graphs, or heat maps, and whatever information
you can get,
    how does that complement this?
    AMELIA PURVIS: Beautiful question. So, the other data that you're
    collecting, how does it complement this approach? In what I've done, this
has formed part of that picture. Again, it's about how you're going to
build your narrative. What story are you telling and what data supports
that narrative? This will give you a very yes-no response. Did this work or
didn't it - yes or no? That might supplement it. You might have this page
    and then you might have the heat map to further support your evidence.
    Your finding might be, "People did click on this button," and then this
    would be a green finding card, like, "People did click on the button." Then
    you would have a heat map to show, "People did click on..." Whatever, or
    they focus on specific areas. So, it depends what your story is and how
    that's going to build and grow through that presentation, and who your
    client is as well. Different clients will engage with different things, so you
    need to know what are gonna be those triggers or those - not
triggers - what are gonna be those key things that your client really wants
    to see. Are they really quantitative-focused, are they qualitative-focused?
    Are they really visual? I would tailor it to them. Did that answer your
    question?
STEVE BATY: Other questions from the audience? While you're
    considering that, I have... Down here. While we bring the microphone up,
    I will ask a question, which is, are there scale considerations? Like, does
    this work well for 10 or 20 participants, but at 50 or a hundred it starts to
    become unwieldy?
    AMELIA PURVIS: I would be asking why you're speaking with 50 or a
    hundred people for a usability test.
    STEVE BATY: Well, yeah.
AMELIA PURVIS: (LAUGHS) This... The approach itself scales. In terms of
the number of participants, I would run this probably up to 20 people.
Over that, I would be really asking why.
    STEVE BATY: Why. Yeah.
AMELIA PURVIS: Why do we need to speak to that many people? All
that's gonna happen is the boxes are gonna keep growing and you're
gonna hit saturation point.
    STEVE BATY: Fair enough. Down the front.
    >> Do you use a combination of tools? Like, I see this cheat sheet is
    based on Miro, and I have a habit of using Dovetail. I put the transcripts
    in and it gives categories and then colour-coding - very convenient. So, I
put my insights on a Miro board, and Notion is very useful as well. What do
you think about having a combination of tools to summarise your data
and find insights?
    AMELIA PURVIS: I would... If the question is, "Would you use different
    tools?" Or you want to use the same tools, Dovetail and Notion and Miro
    to synthesise...?
    >> A combination of tools to work smartly.
    AMELIA PURVIS: For the one project?
    >> That's my habit. I use two...
    AMELIA PURVIS: My question would be, "Why?" If you've got one piece of
    data and the data is the findings from your usability test, if you're
    spreading that out over different tools, what is the tool not giving you?
    Because for me, I need to be able to see the data in one place, and then I
can mix it up and play with it and mash it up. Miro is really flexible in
that sense. I would be thinking, "Are you spreading
    yourself too broad?" And, again, starting at the end, how are you going to
    find the insights and the narratives?
>> I realise these are personal habits. So, I put the rough stuff in Dovetail
and highlight the important stuff and colour-code them, categorise them.
Put them in cards. And then I put the very important insights, like chunks
of sentences, into the Miro board as sticky notes: group them, box them,
label them. That's my way of doing it. I don't know how you think about
it, but it does come back to me. I can complicate the process if I don't
use it wisely!
    AMELIA PURVIS: Yeah. Yep. And you've gotta do what works for you. I
think this is an invitation to try something that works for me. Like I said, I
    have a very short attention span. I like things to be very visual, very
    quick, very easy. I want to get to the end as quickly - not as quickly - I
    want to get to the end as efficiently as possible. So, this works for me.
    But by all means, if you can integrate it into your process or if you have a
    better process, do what works for you.
    >> Thank you very much.
    STEVE BATY: Question at the back of the room.
    >> Hi, Amelia. Have you tried Miro's new, sort of, AI tool to see how
    those insights can be grouped?
    AMELIA PURVIS: I haven't yet but I'm excited to. Yeah, good callout.
    STEVE BATY: Can you tell us more about that feature? What is that tool?
    AMELIA PURVIS: The AI?
    STEVE BATY: Yeah.
    AMELIA PURVIS: I can't 'cause I haven't used it! (LAUGHTER)
    STEVE BATY: Fair enough. Any other questions? One over here.
    >> Hi. I can see how this would work particularly well for usability testing
    because you've got the screens there already and you can kind of know
    what people are gonna provide feedback on. But have you been able to
do it for discovery research, when perhaps you don't really know the
    groupings initially?
    AMELIA PURVIS: Yeah. I thought of the same thing, actually. We're doing
discovery research at the moment, and I wondered whether the findings
cards were gonna be particularly helpful for the discovery research. The
short answer is that they're not, particularly. Just because... Like you
    said, with the usability testing, you have those really clear questions,
    right? Do they click here? Do they notice this? Can they find that?
    Whatever it is. With discovery, it's so much messier. I do always do the
    grouping at the end, so this sort of thing. My discovery boards always
    look like this. There are bits and pieces everywhere. That's where you get
to the nitty-gritty, what's our insight and narrative? The difference
between a finding and an insight is a whole other presentation. But I
know that, for me, this doesn't work particularly well with discovery, yeah.
    STEVE BATY: We have a question via Zoom, asking, "At what point does a
    data point become a pattern?" Like, how many times does it have to
    repeat before you start to consider it a pattern?
AMELIA PURVIS: Ooh. This is getting... There are different responses to
this question. There's the statistical, mathematical response. And then
there's the "what happens for me in practice?" response. So, how much
do you have to hear before - how much does a data point have to repeat
before you get a pattern?
    STEVE BATY: Yeah. Before you start considering it a pattern, I think was
    the question that was asked.
AMELIA PURVIS: For me personally, I'm gonna ignore the statistics question,
because I was never good at maths. For me, personally, I use a kind of
saturation calculator. Are we getting to the point that we are
    just - again, back to the numbers of how many participants are you
    speaking to? If we're just getting - and specifically for usability testing
    here - if we're just getting the same responses over and over and over
    again, we've hit saturation, that's it. We don't need to keep on going. We
    know that data point is pretty good. We do need to pay attention to those
outliers, because sometimes we get really juicy bits of information from
those outliers at the edges of the bell curve. They can be really valuable.
    But, for me, once we've got saturation, we're pretty happy.
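
(One simple way to operationalise that saturation check in code - a
hypothetical sketch, not something from the talk, assuming each session's
notes have been reduced to a set of codes:)

```python
# Hypothetical saturation check: treat saturation as "the most recent
# `window` participants introduced no codes we hadn't already seen".
def reached_saturation(codes_per_participant: list[set[str]], window: int = 2) -> bool:
    if len(codes_per_participant) <= window:
        return False
    seen: set[str] = set()
    for codes in codes_per_participant[:-window]:
        seen |= codes
    # Saturated if every recent session's codes are a subset of what we knew.
    return all(codes <= seen for codes in codes_per_participant[-window:])

sessions = [
    {"nav confusing", "download clear"},
    {"nav confusing"},
    {"download clear"},
    {"nav confusing", "download clear"},
]
print(reached_saturation(sessions))  # True: the last two sessions added nothing new
```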
    STEVE BATY: Excellent. Alright, please join me in thanking Amelia. Thank
    you. (APPLAUSE)
