A service to help a computer-illiterate person find and apply for jobs with the help of a kiosk and a voice assistant. Job openings are curated based on the user's profile information. Users can apply for them with a single touch. They can also track the status of all their applications.
View prototype
Market Research, Product Design, Voice UX Design, Branding, Visual Design
Paper, pen, Adobe Illustrator, Sketch, Figma, UsabilityHub
User journey map, User stories, Voice Persona, Dialog flows, Fallbacks, Paper sketches, Sitemap, Lo-fi Wireframes, Interaction Flow Diagram, Style Guide, Hi-fi mockups, Prototype
I came across this article a while ago and was immensely moved by it. It sheds light on how a man helped a homeless person apply for jobs. Applying for jobs these days is a difficult task for people with limited access to resources. Most companies, organisations, and stores require online job applications. Ironically, the very technology that raises this barrier may also be the best way to lower it: how might we use technology to solve this problem?
This issue can be resolved with the help of a voice assistant that can help the user create a job profile and find suitable jobs. Since voice alone would not help accomplish this, the experience can be augmented with an easy-to-use and intuitive kiosk interface.
In order to understand the prevalence of the digital divide, I read this article published by the Nielsen Norman Group, which discusses a study done between 2011 and 2015 to test the computer skills of ~216,000 people across 33 countries. It found that 26% of adults were unable to use a computer at all. How much more of a struggle must it be for the less fortunate?
Are there any existing solutions in the market?
There are plenty of facilitators like Apple's Speech features on iOS and Mac OS X, Android's Select to Speak, Text-to-speech output, Magnification etc.
I used Android's Select to Speak feature on my phone to fill in the Sign Up form on Glassdoor's website. The form appears in a modal window, so ideally only the form's contents should be read out. However, the screen reader scans the entire page's contents, including what's beneath the modal window. This doesn't make for a great experience.
Doing the same with Google's Voice Assistant only fetched the website link for me and I was expected to open the page and fill up the form myself.
Some of the currently available speech-to-text options are Speech Recognition on Windows, VoiceOver utility on iOS & Mac OS, Google Voice Recognition for Google Docs, third-party Chrome browser-based apps like Voice to Text etc.
A Chrome browser extension that works better than the rest is SpeechPad. However, this requires manual selection for each field from the browser's context menu to provide input through voice. This is a tedious process.
Some retail stores let people apply for jobs via their in-store kiosks. Managers can quickly screen a large number of applications. Even though this process is a seemingly quick one, going to every single store to apply for jobs would be exhausting. Ideally, we want a single place for their job seeking needs!
The underlying issue is that currently, there is no easy method to communicate with the Web using voice, especially for a technologically-challenged person.
There's a fairly large section of the population that is technologically-challenged.
Homeless people are limited by accessibility to resources even if they're tech-savvy.
With all the research I'd done, I decided to define some concrete goals to tackle this problem.
Ease the job application process for the technologically-challenged using voice interaction.
Increase accessibility to the application process.
Make it convenient to track job applications and follow up with employers.
Donning the hat of a product manager, I came up with a few solutions to this problem:
A browser-level feature using voice to talk directly to users.
A system-level feature integrated into operating systems which allows screen readers to read out exactly what users want to hear and receive user input.
A stand-alone voice application that can be installed on mobile and desktop devices.
These solutions would have to be highly context-aware to work well. While they are programmatically easier to implement, there's still the issue of accessibility to the Web. I needed a more holistic solution!
A stand-alone kiosk that can be installed in multiple locations and would let users interact via voice and touch to create a profile, apply for jobs and keep track of applications. It would be easily accessible to everyone and users don't have to own a device at all!
Some pros and cons associated with implementing this idea:
In order to empathise with the user and plan the product interaction, I worked on a user journey map. It helped me clearly define the product as a self-contained, holistic system.
With the user journey map spelled out, it was time to delve deeper into the user's engagement with the system. I created a workflow diagram and defined user stories to better represent the system.
Next step was to create a persona for the voice itself. I modelled a persona chart that defines the principles, characteristics and branding of the voice.
In order to humanise the system, I decided to give a name to the voice: Ly, pronounced as lee and derived from Kaamly.
Since this product is catered primarily towards the tech-challenged population, I decided to include a mix of the following features:
• a voice assistant
• corresponding text on the screen
• suggestion chips
• buttons for better guidance
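The mix of modalities above can be pictured as a single bundled "turn" that the kiosk presents. Below is a minimal sketch with a hypothetical `make_prompt` helper; all field names and sample copy are illustrative assumptions, not from a real framework:

```python
# Hypothetical sketch of one multimodal turn from Ly: the spoken line,
# the matching on-screen text, suggestion chips, and guidance buttons.
def make_prompt(speech, chips, buttons):
    """Bundle everything the kiosk presents for a single conversational turn."""
    return {
        "speech": speech,        # what Ly says aloud
        "display_text": speech,  # the same text shown on screen
        "chips": chips,          # quick-reply suggestion chips
        "buttons": buttons,      # touch targets for extra guidance
    }

prompt = make_prompt(
    "What kind of work are you looking for?",
    chips=["Retail", "Cleaning", "Kitchen"],
    buttons=["Help"],
)
```

Keeping the spoken and displayed text identical is deliberate here: the screen reinforces the voice rather than competing with it.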
Based on the research I did on existing Voice UX best practices, these are a few key points that I strove to model my conversational design around:
Conversation needs to be based on how people speak and not how they write.
Errors should be opportunities to use fallbacks to progressively guide the user.
Conversation should always end with a question or a confirmation for completion.
Use context to connect with the user.
Use vocabulary that reflects the brand identity.
I created a rough flow structure of the conversation, along with the user's inputs and Ly's intents, to better inform the design system.
Next, I had to think of error strategies, or fallbacks: the messages the bot conveys when it is unable to understand a user's input. These help users complete tasks and save time. There would be three kinds of fallbacks: global, context-specific and help.
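The three fallback tiers can be sketched as a small escalation function: try a context-specific reprompt first, fall back to global reprompts, and finally offer help. All names and messages below are illustrative assumptions, not production dialog copy:

```python
# Illustrative fallback tiers: context-specific, global, and help.
GLOBAL_FALLBACKS = [
    "Sorry, I didn't catch that. Could you say it again?",
    "I still didn't get that. You can also tap an option on the screen.",
]

CONTEXT_FALLBACKS = {
    "ask_name": "I didn't hear your name. Please say just your first name.",
    "ask_job_type": "You can say a job type, like cleaning or retail.",
}

HELP_FALLBACK = "Say 'help' at any time and I'll explain what you can do."

def pick_fallback(context, attempt):
    """Escalate: context-specific first, then global reprompts, then help."""
    if attempt == 0 and context in CONTEXT_FALLBACKS:
        return CONTEXT_FALLBACKS[context]
    if attempt < len(GLOBAL_FALLBACKS):
        return GLOBAL_FALLBACKS[attempt]
    return HELP_FALLBACK
```

Escalating this way turns each error into progressive guidance rather than a dead end, which is the point of the principle above.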
I sketched out a few design ideas for the kiosk interface, a means of auxiliary support to the primarily voice-driven product. I was inspired by the idea of children's board books, i.e., to really break down and simplify the content while not sounding patronising to the users.
Option B was preferred by 88% of the users as they felt that the button was more accessible and visible when it's in the centre of the screen.
33% users' thoughts:
simpler to understand, less is more, less to focus especially for this target audience.
48% users' thoughts:
easier to see multiple jobs, resembles a bulletin board, comparable listing, concise, faster access even if there's more information load.
19% users' thoughts:
more reliable, everything is visible clearly.
Even though Option B fared the best, these preference tests can't include voice interaction, so I chose to go with Option A. Displaying just one job listing per page allows the user to apply for it by saying "Apply for job"; multiple job listings on the screen wouldn't allow this.
Out of the 26 users who tested, 65% preferred Option B. Below are some of their reasons:
• even if they are computer-illiterate, they would have seen a keyboard
• it is more visually understandable
• a larger than normal keyboard (not necessarily QWERTY) is a lot more intuitive than the letter thing like old Nintendo games
• clearly illustrates keyboard
Unfortunately, it's not easy to do early remote testing of wireframes with computer-challenged users, so I asked computer-savvy testers to pretend to be otherwise. More concrete decisions are contingent on testing done out in the field.
"Kaam", written as काम , means work in the Hindi language. So I decided to name the product "Kaamly" which also sounds like calmly to give a sense of reassurance.
Following feedback sessions with senior designers and friends, I iterated through various design ideas.
How would more than, say, 6 interview slots (across multiple days) be displayed on the screen? How to ensure users get a quick view of their existing appointments?
In this iteration, it was possible to show more interview slots but this still wouldn't help the user figure out potential conflicts with any existing appointments.
Eureka! A solution that avoids scrolling or clicking through many "pages" of available appointment times by showing the user's calendar with existing appointments. This also helps avoid conflicts. Assumption: no more than 6 appointments would be available per day.
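The conflict-avoidance idea can be sketched as a simple filter over offered slots, under the stated assumption of at most 6 appointments per day. The days and times below are made up for illustration:

```python
# Sketch: compare offered interview slots against the user's existing
# appointments and keep only the non-conflicting ones.
MAX_SLOTS_PER_DAY = 6  # assumption stated in the design above

def free_slots(offered, existing):
    """Return offered (day, hour) slots that don't clash with existing ones."""
    taken = set(existing)
    return [slot for slot in offered if slot not in taken][:MAX_SLOTS_PER_DAY]

offered = [("Mon", 10), ("Mon", 11), ("Tue", 9)]
existing = [("Mon", 11)]
# free_slots(offered, existing) → [("Mon", 10), ("Tue", 9)]
```

Showing only already-filtered slots means the user never has to spot a conflict themselves, which matters for this audience.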
Taking the user through all 6 steps of account creation could potentially get tiresome. How to make the job search more instantaneous?
Give the user an intermittent, preliminary option to perform a profile-aware job search before going through the entire account creation process.
Since these users are computer-illiterate, aggressively simplify the task of getting users' information; each input step would be a screen of its own.
I wanted to design a more accessible keyboard. Since Android is the underlying OS, a customisable keyboard can be implemented by defining an XML file with the keys and their corresponding input values.
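Such a layout file might look like the minimal sketch below, using Android's `Keyboard` XML format (`android.inputmethodservice.Keyboard`). The key sizes and the handful of letters shown are placeholders, not the final Kaamly design:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Minimal sketch of a custom Android keyboard layout.
     Oversized keys aid accessibility; values here are illustrative. -->
<Keyboard xmlns:android="http://schemas.android.com/apk/res/android"
    android:keyWidth="20%p"
    android:keyHeight="80dp"
    android:horizontalGap="2dp"
    android:verticalGap="2dp">
    <Row>
        <Key android:codes="97" android:keyLabel="a" />
        <Key android:codes="98" android:keyLabel="b" />
        <Key android:codes="99" android:keyLabel="c" />
        <Key android:codes="100" android:keyLabel="d" />
        <Key android:codes="101" android:keyLabel="e" />
    </Row>
</Keyboard>
```

Because key size and arrangement live entirely in this XML, a non-QWERTY, larger-keyed layout can be iterated on without new hardware.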
With all the required user information already collected seamlessly, applying for a job is as simple and quick as the touch of a button.
Present users with suggestion chips to provide a guided and seamless interactive experience.
Included a reassuring onboarding guide with quirky illustrations to ensure users know what the process would be like.
Once all the design decisions were made and all screens had come to life, I linked all of them together and created a Figma prototype that would eventually be used for user testing.
View clickable prototype
One of the hardest aspects of this project is testing it with the target audience. How do I find technologically-challenged users? One way is to go to a public location like a library, survey some users to understand their comfort level with technology, and select a couple of them for usability tests. This is my next step: it would give me an initial idea of which features may not work as I'd hoped, and let me iterate further based on those results.
This has been a true passion project that made me step outside my comfort zone. I had to strip away my knowledge of how the Web currently works and simplify the steps for recording information into small chunks that don't overwhelm the user. This required a special level of empathy.
Coming up with an unconventional design solution allowed me to think like a UX generalist. Choosing Android as the underlying OS eliminated the requirement for new hardware for the MVP version. This allows easier usability tests, thus reducing investment in terms of time, effort and money. Eventually, product success and funding can help make the decision to build out a stand-alone interface with its own proprietary hardware and software. I absolutely love how challenging this project has been and I hope to conduct some field testing to ensure my solution is indeed a viable one!