Winner Spotlight: Handsfree Learning Takes a Hands-Off Approach to Education—Literally
There’s been considerable innovation in edtech; it’s just been relegated to one side of the education house, says Josh Salcman. What’s been left in the dust is the kind of learning that doesn’t come from books and texts, but from doing and practicing.
So he launched Handsfree Learning, a platform that gives students and teachers video-based tools for what he calls learning-by-doing disciplines. As the name implies, Handsfree’s technology uses voice commands so students’ hands stay free to rehearse surgical techniques, mechanical maneuvers and other skill sets.
Salcman competed in and won the education category during last week’s Challenge Cup San Francisco. Now he’ll go on to the Challenge Festival in D.C. in May. Salcman spoke after his win about how the technology works for both pupils and teachers and what he’s gotten out of each iteration of the product.
Your sphere of education is learning by doing. Can you explain what disciplines fall into this heading and how big of a space this is?
It’s a really interesting landscape because it comprises a number of fields that you wouldn’t necessarily think of as being linked together. These are all disciplines where you have to develop muscle memory, microskills and skill clusters and that happens by practicing and practicing and practicing. We see that pattern in fields like surgery, dentistry, dental hygiene, cosmetology, culinary arts and even things like auto repair. It’s not just entire fields but also sets of skills that are integral to even more fields.
And when we think of how big a landscape that is, we look at data from the Department of Labor on what are predicted to be the fastest-growing and most in-demand fields over the next decade. The Department of Labor also maintains a database in which people have reported how much they use their hands in the course of their jobs; we filter for occupations above a certain threshold. Then we look at what level of training is required to hold a position in that field, whether it requires a certificate, an undergraduate degree or a graduate degree.
When we look at all of that, we come out with a market opportunity of over $1 billion. Some of those numbers we’ll refine as we progress. The bottom line is that there’s a lot there, which is why we’re very excited.
The technology itself has a video component and a feedback loop for students to interact with instructors. If I’m a user, how do I leverage all of that to learn?
The audiences we’re primarily focused on are students and teachers. From the point of view of a student, when you’re trying to learn a new technical skill, the typical pattern is that you observe someone who has already mastered that skill. Then you have to essentially mirror what they’re doing and practice; you cycle back and forth between watching and doing. Then at some point you need to demonstrate to someone else the level of mastery you’ve achieved. Our technology supports that process. We’ve built a mobile app for iOS and Android that gives students the ability to control the video-based instructional content without using their hands. We’re developing and refining a voice-control interface so students can say, “Pause,” “Fast forward,” “Rewind,” “Go back to the previous section,” or “Continue.”
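The voice-control flow Salcman describes, a small set of spoken phrases mapped to player actions, can be sketched roughly as follows. This is an illustrative sketch only; the phrases and action names here are assumptions, not Handsfree’s actual interface.

```python
# Illustrative voice-command dispatcher for a hands-free video player.
# Phrase and action names are hypothetical, not Handsfree's API.

VIDEO_COMMANDS = {
    "pause": "PAUSE",
    "fast forward": "SEEK_FORWARD",
    "rewind": "SEEK_BACK",
    "go back to the previous section": "PREV_SECTION",
    "continue": "PLAY",
}

def dispatch(transcript: str) -> str:
    """Map a recognized speech transcript to a player action.

    Unrecognized speech is ignored so stray classroom chatter
    doesn't accidentally control playback.
    """
    phrase = transcript.lower().strip().rstrip(".!")
    return VIDEO_COMMANDS.get(phrase, "IGNORE")

print(dispatch("Pause"))                              # PAUSE
print(dispatch("Go back to the previous section."))   # PREV_SECTION
```

In practice the hard part is the speech recognition itself; the point of the sketch is only that the command vocabulary can stay small and closed, which keeps recognition reliable while the student’s hands stay on the task.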
That’s the listening phase. Then, as they’re practicing, they may have an assignment where they’re required to demonstrate their skill. At that point they engage a different mode in the app, which is the recording mode so they can record a video of themselves and then submit that as part of the assignment.
If you think of that in relation to disciplines where you learn by reading something like math, you practice to the point where you can do the problems. Then you do your homework and turn the paper in. This is a pretty similar process except it’s all video-based.
From the teacher’s point of view, they have access to a web app that allows them to see the student submissions and then go in and assess the videos according to rubrics and checklists. They can do the assessments and provide feedback, both general comments and comments tied to specific points in the video. If 90 seconds in they see the student doing something either really good or really bad, they can give feedback tied specifically to that time stamp. Once they’ve completed the assessment—to go back to the analogy of the math homework—it’s like they’re returning the page to the student, only via the mobile app.
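The timestamp-anchored feedback he describes might be modeled with a structure like the one below. The class and field names are hypothetical illustrations, not Handsfree’s actual data model.

```python
# Hypothetical model of teacher feedback anchored to moments in a student video.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Comment:
    text: str
    timestamp_s: Optional[float] = None  # None means a general comment

@dataclass
class Assessment:
    student: str
    video_id: str
    comments: List[Comment] = field(default_factory=list)

    def add_general(self, text: str) -> None:
        """Overall feedback on the whole submission."""
        self.comments.append(Comment(text))

    def add_at(self, seconds: float, text: str) -> None:
        """Feedback tied to a specific moment, e.g. 90 seconds in."""
        self.comments.append(Comment(text, timestamp_s=seconds))

a = Assessment(student="student-1", video_id="demo-submission")
a.add_general("Good overall technique.")
a.add_at(90, "Watch your grip here.")
```

The design choice worth noting is that a general comment and a timestamped one share a type, so the app can render them in a single feedback stream, sorted by time, when the assessment is returned to the student.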
Can you explain where you are in the development of this and how vetted the features are? You mentioned refining aspects like voice control. How much testing of Handsfree have you done so far?
The instructional platform is now in its third iteration. We built the first prototype in the summer of 2014, actually on the Google Glass platform; since then, we’ve moved to a more inclusive delivery platform, so we’re now using mobile phones. We were able to learn a lot from the Google Glass version and apply it to the alpha version. On Android we did some internal testing to refine the usability of the interface, and we’re learning as we go as far as content structure.
The next version of the mobile app is currently under development and that, combined with the teacher-facing app, will be piloted this summer. We’re piloting it at a few institutions.
How do you make money off of this? I believe the model you’re looking at is per-student/per-course?
It’s pretty straightforward and a model that’s been used successfully by multiple publicly traded education companies. Basically, it’s a license model where we work with an institution to provide their students and teachers access to the platform and, as you said, it’s a per-student, per-course model. The price point we’re starting at is $25 to $50 per student per course. That price may go up or down depending on the pricing dynamics of the courses; we’ll make sure we’re in line with the overall pricing model of the courses themselves.
In addition to that, there are also implementation fees, which are more of a one-time cost per institution, and there may also be fees associated with us collaborating with our partners on developing the content. That’s likely to be on a per-course basis.
For you personally, this is your second edtech venture. How did founding Virtual Nerd help pave the way for Handsfree?
Yes, as you said, Virtual Nerd was my first company. It was a great experience. It was K-12 focused, and the thing that was really valuable was the focus we had on the product and the user experience, especially from a student point of view. The platform, which we ended up patenting, was the first and only one that allowed students to interact with instructional content in the form of video in a way that was personalized to their individual needs. They did this by navigating a nonlinear structure depending on what they did or did not understand within the videos. That experience, using video for instruction, is obviously very relevant to Handsfree.
Let’s talk about your Challenge Cup experience. Last year you were a judge and this time the tables were turned so that you were competing. What was that shift like?
Humbling. It was definitely a very different experience. I was very glad to have been able to see it from both sides. It probably gave me some advantage to sort of know what the judging process looks like.
One of the things that’s unique about Challenge Cup is the format; it’s exciting for everybody because of the one-minute pitches followed by the five-minute pitches. But one-minute pitches are incredibly stressful because the margin for error is zero. For me, having been through the process before as a judge and knowing that the one-minute pitch is not the only basis on which you’ll be judged helped. It mitigated the anxiety a little bit. But still, being on that stage with the bright lights shining and not being able to see the audience very well—it’s a unique experience.