Lessons Learned
01   Letting research findings guide the product
Looking back, I realized I went into this research with the assumption that most people would want something to help them keep their plants. So when my survey results showed that most plant owners, in fact, do not want help keeping their plants, I was shocked. This experience showed me the dangers of confirmation bias, and how important it is to let research findings guide my design decisions.
02   Phrasing behavioral survey questions
In hindsight, some of the behavior-related questions in my survey could have been worded differently. I used words like "want to" and "like," which capture a user's preferences but not their actual behavior. For example, just because a respondent says they like talking to other people about their plants does not necessarily mean they often do.
03   Reverse-categorization in affinity mapping
In the first version of my survey, respondents started submitting their multiple-choice responses in short-answer format because their plant experiences were too unique to fit into the categories I provided.
So, in the final version of the survey, I rewrote all questions about plant-keeping experiences as short-answer questions, then organized the responses into categories afterwards. Doing so produced much more accurate and insightful results, since respondents could describe their experiences in their own exact words.
Reflection
Takeaways
✍️  Asking the right survey questions
📱  Wireframing and prototyping
👯🏻  A/B testing with lo-fi and hi-fi frames

Next Steps
📱  Expanding frames
👯🏻  More rounds of user testing
📈  Getting a Net Promoter Score (NPS)