VRguy podcast Episode 14: Prof. Russell Taylor on open source frameworks for VR

My guest today is Prof. Russ Taylor, formerly of UNC. This episode was recorded on June 1st, 2016.

Russ and I talk about the evolution of open source frameworks for VR – starting from the very early days, to VRPN some 20 years ago up to modern frameworks such as OSVR. We talk about lessons learned, drivers for open-source framework and what else is needed.


Interview transcript

Yuval Boger (VRguy):     Hello Russ, and welcome to the program.

Russ Taylor:        Thank you, Yuval. Thanks for having me today.

VRguy:  Pleasure to speak again. You know, we speak quite often because we work together these days but for those that don’t know you, who are you and what do you do?

Russ:     I was, for 20 years, a research faculty member at UNC Chapel Hill while they were building different VR applications and hardware and head mounted displays and rendering engines. Recently, I've left to do consulting, and one of the groups I'm working with is Sensics on the OSVR project. I'm very excited about that.

VRguy:  I think that people spend a good chunk of their life getting to be a full professor, and here you are leaving all this behind. That’s very unusual, right?

Russ:     Yep. That’s a long story. I felt like about 2 years ago God tapped me on the shoulder and said, “It’s time to go somewhere else.” I was like, “Great! Tell me what’s next!”, but so far that was the whole message, and so I’ve been looking for how to get plugged in to disaster relief or other on the ground enterprises. Meanwhile, I’ve been having a great time consulting and working on open-source VR and other things.

VRguy:  When we started working together at Sensics, I think I took a visit to UNC to see you. At that time, I only knew you as the creator of VRPN, so let's start with VRPN. When did it start, and what was the motivation for you to create it? And, of course, what is it?

Russ:     VRPN is the Virtual Reality Peripheral Network. Henry Fuchs was leading the VR program at UNC along with Fred Brooks, and he said, "You know, we need to develop the next generation of API that everybody's going to use at UNC. Why don't you go see if there's something already out there that we should pick up and be using, or if we should be writing something ourselves?"

                There was some NSF funding for that, and that started about a 6-month exercise of me talking with all the different developers at UNC, talking with the people that were around, looking at all of the available toolkits, looking at what the needs were, and realizing that it was almost embarrassing. There were maybe 10 or 15 toolkits out there, but looking at the needs it looked like we needed to do something a little bit different.

                VRPN was the beginning of that. It was going to grow into what OSVR is growing into now. We got up through the device level, and then I got called off to work on other projects, and so it got halfway or a third of the way to where I thought it was going to be. It was supported for a number of years by the National Institutes of Health National Center for Research Resources, and so it was put out as open-source software. In that case, it started out as public domain and then went to a BSD-like license and eventually to the Boost license.

                Then NCRR was disbanded; they were kind of voted off the island, so our tax dollars are no longer supporting that infrastructure. But we kept using VRPN, and by that time industry and other groups had taken over and it was really entrenched.

VRguy:  When did that effort start? When was VRPN conceived?

Russ:     I need to look that up. The first CVS commit was made in May, 1997.

VRguy:  Okay. VRPN created a device abstraction layer that, as you mentioned, was focused primarily on trackers and input devices. Is that correct?

Russ:     That’s right.

VRguy:  Then it got embedded into several devices. I think ART trackers have an embedded VRPN server, and probably a few others do, right?

Russ:     Yeah, so there are actually several different vendors who include a VRPN interface in their standard release. OptiTrack does that. Vicon does that. vSpace does that. Of course, the UNC Hiball Tracker did that. That was, to me, pretty cool. The thing that really surprised me was that it was adopted by a bunch of different toolkits. The thing I talked about when I wrote the paper is it’s embarrassing to have written another API for talking to devices, but the fact that it had been adopted by other VR developers was encouraging.

                Panda at Disney was using it. Visual Molecular Dynamics was using it. Virtools, Avango, Syzygy, WorldViz, DART, VR Juggler, EnSight, CryVR … A bunch of different people were adopting it for their device layer. One of the things I realized is you can only standardize what nobody cares about. Nobody cares about how you get your bits from the tracker. A lot of people care about the scene graph. A lot of people care about rendering, but in terms of getting the devices, everybody was happy to use the same underlying base.

VRguy:  You mentioned VRPN server, so there must be a client. Why did the client-server architecture start? Are they on separate machines necessarily? How does that work?

Russ:     At UNC, we had PixelFlow, Pixel Planes 5, and Silicon Graphics Infinite Reality engines, so the client that was doing the rendering ran on different machines down in the machine room. We had a video switch that would switch the inputs to the different head mounted displays. The head mounted display and its trackers were not co-located with the rendering engines.

                There were reasons that you would like the server and the client to be separate. It turns out that a lot of times you would load up your world, you'd put in all the polygons or load your volume rendering, and the tracker would go down. Then you had to restart your program. Also, for example, the Flock of Birds tracker would sometimes take 30 seconds to 45 seconds to reset properly. Once it was running, you wanted to leave it running.

                We basically put a PC at every station that would run all the tracking devices and the input devices and haptic devices, and then we would run whatever rendering machine we wanted to run, and then connect them up over the internet. One thing that was surprising: obviously you would ask how much latency you are adding when you do this. Let's look at the case of serial devices. Back in the day, the jiffies on the Unix kernel and on Windows were set to about 10 milliseconds. It would take 10 milliseconds before the application would get out the characters, but if you built a custom kernel on the PC and ran Linux, you could actually get the characters out 9 milliseconds faster, and then you've got a half-millisecond network transport.

                You could actually get reduced latency using a remote server as compared to a locally plugged-in device.
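
For readers who haven't seen VRPN code, here is a minimal sketch of what that remote connection looks like from the client side. The device name "Tracker0" and the host "tracking-pc.example.edu" are placeholders for whatever the station's server is configured to expose.

```cpp
// Minimal VRPN client sketch: read a tracker served by a different machine.
// "Tracker0@tracking-pc.example.edu" is a placeholder device@host name.
#include <cstdio>
#include <vrpn_Shared.h>
#include <vrpn_Tracker.h>

// Called once per report the server sends for this tracker.
void VRPN_CALLBACK handle_pose(void *, const vrpn_TRACKERCB t)
{
    std::printf("sensor %d at (%.3f, %.3f, %.3f)\n",
                (int)t.sensor, t.pos[0], t.pos[1], t.pos[2]);
}

int main()
{
    // The application neither knows nor cares what hardware sits behind the name.
    vrpn_Tracker_Remote tracker("Tracker0@tracking-pc.example.edu");
    tracker.register_change_handler(nullptr, handle_pose);

    for (;;) {
        tracker.mainloop();   // pump network messages and fire callbacks
        vrpn_SleepMsecs(1);   // don't spin the CPU between reports
    }
}
```

Because the device is addressed as name@host, moving the tracker server to another PC, or moving the application to a different rendering machine, does not require changing application code.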

VRguy:  You mentioned one of the things that surprised you was how widely VRPN was adopted. As you think back about the progression of VRPN, any particular disappointments? Anything where you say, "Well, I should have done that differently," or "I wish it had come out in a different way"?

Russ:     Well, so there have been some changes we made. At first, we developed a clock synchronization protocol that would run when you made the client-server connection. Over time, the Network Time Protocol came into wide use and they knew way more about that than we did, so we threw that out.  One of the big problems with the early versions of VRPN was if there were error messages, they were on some server somewhere else that maybe didn’t even have a console on it, and so you didn’t know what went wrong.

                In more recent versions, we’ve incorporated VRPN text messages – warnings, errors and info – that get passed across the connection and, by default, printed on the client’s side so you can find out about those things happening. One of the things that’s still not perfect about VRPN is when you have devices, you have a couple of choices. You can do a state interface or you can do a message interface, a delta interface. Delta works really well for trackers, where you get new information and then it’s all you need to know.

                For buttons, you have problems like, "When I connect to the server, was the button pressed?" If it was, then the first thing my app gets is a button release event. This can make it very unhappy. We've since put in some state at startup. For example, if you connect and a button is down, the client immediately gets a button-down event, but the state wasn't integrated to start with and isn't super well thought out. There are still some issues with it.
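
A minimal sketch of the button side, assuming a placeholder device name of "device0@localhost", shows how the message-style interface looks to a client; the comment on the callback marks where the startup-state behavior described above matters.

```cpp
// Minimal VRPN button client sketch; "device0@localhost" is a placeholder name.
#include <cstdio>
#include <vrpn_Shared.h>
#include <vrpn_Button.h>

// Fired once per change report (the message/"delta" style). With the
// startup-state behavior described above, reports sent right after the
// connection is made convey the buttons' current state, so the app never
// sees a release for a press it was never told about.
void VRPN_CALLBACK handle_button(void *, const vrpn_BUTTONCB b)
{
    std::printf("button %d is now %s\n",
                (int)b.button, b.state ? "pressed" : "released");
}

int main()
{
    vrpn_Button_Remote buttons("device0@localhost");
    buttons.register_change_handler(nullptr, handle_button);

    for (;;) {
        buttons.mainloop();   // deliver any pending change reports
        vrpn_SleepMsecs(1);
    }
}
```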

VRguy:  Good. One thing that happens, I guess, in an open source project is that you have a potential for community participation. How large was the VRPN community? Were there contributors that surprised you by putting code in?

Russ:     I don’t actually know how large the community was. There were over 100 people on the mailing list. Every once in a while, you’d get email from somebody at the French Railroad Company telling you they were using VRPN and had an error message, could you help with that. The error message would be in French. I’m like, “How does an error message in French come out of my code?” They were, of course, using a localized Unix, and so it was a segmentation fault.

                It was interesting to me that out of the 60 or so devices that are in VRPN now, about half of them were contributed either by vendors or by people that weren’t at UNC. That’s just how I found out, for example, about the Razer Hydra. One day this guy named Ryan Pavlik submitted a pull request that had this Razer Hydra. I realized, “Wow! This is the greatest thing ever.” I didn’t even know this device existed.

                There was a large community. I actually don’t know how to track that. We had tens of thousands of web hits and downloads, but …

VRguy:  Got it. About 2 years ago, I took a road trip to UNC to see you. The first time, I thought I needed to get your autograph, you were so famous. We did a little seminar for the UNC folks and presented this concept that we now call OSVR. I think at the time, we called it OpenGoggles. Do you remember what your initial reaction was to this OSVR story?

Russ:     Yes, I remember when you came to talk to me about OSVR. I thought, “Wow, this is really cool! This is an opportunity to grow the rest of the API that VRPN was the tip of the iceberg for.” I don’t know if you remember, but on my whiteboard I had drawn this very complicated diagram of here are the next steps you want to add, with pointers to which things depended on what. You were very focused on, “Okay, how are we going to get this so that people can use their head mounted displays so they can actually start doing games?” I kept talking about all the different things that were going on.

VRguy:  Now we have OSVR and we’ve added all these different devices that were not fully formed or just did not exist in VRPN, display devices and eye tracking and skeleton and so on. Sometimes I think you refer to OSVR as VRPN 2.0. Well, what do you think should be in VRPN 3.0?

Russ:     I think OSVR is the thing that I would like to have built when I built VRPN. It adds a semantic graph that talks about what each button is, and this is something that a lot of people asked for in VRPN that we never put in. There’s a graph describing how the world is set up locally on the VR device, distortion correction, prediction, direct mode, time warp, oversampling, all this stuff that’s really helpful to app builders and really is hard to do.

                That’s already in there, and that’s pretty exciting. If I think about what’s going in next, I think that a library’s job is to make difficult things easy. What are the next difficult things that haven’t been dealt with? One is how to do calibration. There’s calibrating the distortion correction but there’s also calibrating where is the sensor mounted on the head mounted display, where are the hands with respect to the head, how those things interact in the world.

                There are the issues of picking and interaction. How do you start to deal with making it an abstraction that you’re grabbing objects, you’re shooting things, you’re doing whatever it is that you want to do – which requires carefully making some kind of a transformation hierarchy that doesn’t try to take over what the game developer is doing but that is interfaced between very simple operations and actions in the game.

                Starting down that path, you know the things that we’re working on in terms of a gesture interface. How do you couple location, behavior, and meaning all into one utterance?

VRguy:  If I think today about OSVR, we have the game engine and graphics engine plug-ins, which I think did not exist in VRPN.

Russ:     True.

VRguy:  Then we had the whole set of VR utilities, distortion correction, asynchronous time warp, measurement utilities, and then a device library, where we were fortunate to be able to inherit a lot of the VRPN devices and then, of course, added devices, either because they didn’t exist at the time or because they’re just new types of devices.

Russ:     Also you added the semantic graph on top of that, and so you know what the meaning of an analog is or what the meaning of a button is, not just that it exists, which is really helpful.

VRguy:  The semantic graph is like a directory structure for a file system, right? You don’t have to know the IP address of the server that you’re reaching into. You just have to say, “I’m going to this mail server,” or some other symbolic name.

Russ:     Yep, kind of like /me/head for the head-mount tracker, or /me/controller/left/button/0 for the left finger trigger.
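
As an illustration of that indirection, here is a minimal sketch against the OSVR ClientKit C API; the application identifier string is a placeholder, and /me/head is the semantic path mentioned above.

```cpp
// Minimal OSVR ClientKit sketch: ask for a device by semantic path, not by address.
// The application identifier "com.example.osvr-path-demo" is a placeholder.
#include <cstdio>
#include <osvr/ClientKit/ContextC.h>
#include <osvr/ClientKit/InterfaceC.h>
#include <osvr/ClientKit/InterfaceCallbackC.h>
#include <osvr/Util/ClientReportTypesC.h>

// Called whenever a new head pose arrives.
void headCallback(void *, const OSVR_TimeValue *, const OSVR_PoseReport *report)
{
    std::printf("head at (%.3f, %.3f, %.3f)\n",
                report->pose.translation.data[0],
                report->pose.translation.data[1],
                report->pose.translation.data[2]);
}

int main()
{
    OSVR_ClientContext ctx = osvrClientInit("com.example.osvr-path-demo", 0);

    // "/me/head" names a role in the semantic graph; the server's configuration
    // decides which physical tracker and sensor actually back that path.
    OSVR_ClientInterface head = nullptr;
    osvrClientGetInterface(ctx, "/me/head", &head);
    osvrRegisterPoseCallback(head, &headCallback, nullptr);

    for (int i = 0; i < 100000; ++i) {
        osvrClientUpdate(ctx);   // pump messages and fire callbacks
    }
    osvrClientShutdown(ctx);
}
```

The same application works whether /me/head is backed by an HMD’s built-in tracker or an external tracking system; that mapping lives in the server’s configuration, not in the app.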

VRguy:  How about GUI in terms of menus, floating menus, fixed menus? Is that something that you’d like to see in OSVR or next generation?

Russ:     That’s on my list. It’s pretty far out there, because you need to have not only some kind of transformation hierarchy and interaction library, but also some kind of geometry description. Now as you try to do 2D menus, you need to understand how to specify color, geometry, materials and textures, which is very different in the different rendering engines. I think that would be helpful.

                We had that back in the day at UNC. Erik Ericson had put together one of those where you could pick menus and so forth, but there’s a whole dissertation involved in how you do that well. Mark Mine did the body-centric control system to do an immersive design interface. I think it’s hard, I think it’s important, and I think it’ll be useful but it’s beyond navigation and beyond interaction. It’s third on my list of next things to add.

VRguy:  As we come towards the end of this conversation, one of the concerns with these kinds of projects is scope creep, that they try to do too much. Do you remember anything that you were asked to do or put in VRPN where you said, “No, no, no. This is not the job of this library,” for instance?

Russ:     Yeah, the semantic graph, the lookup table. Maybe twice or 3 times a year somebody would say, “It needs to have some kind of semantic graph. You need to have some lookup table to find your servers.” I was of two minds about that, because it was something that some people care about. Some people want to use LDAP. Some people want to use HTML. Some people want to use a SQL database. If you pick one, it can make other people grumpy. Also, to me it’s not part of what VRPN does.

                It’s very appropriate in OSVR to have that, because you’re trying to figure out what devices there are, automatically detect them, auto configure them. That’s something that people pushed for that wasn’t right to put in VRPN but it’s very exciting to see it layered on top in OSVR.

VRguy:  My last question: you’ve built this great VRPN open source project, and now you’re an important part of building OSVR as part of Sensics. What advice can you give to people who are considering starting or building or contributing to these kinds of open source frameworks?

Russ:     I guess one thing to realize is … whenever you’re using a library, you need to count the costs. You say to yourself, “Well, I want to use this thing.” You have to realize that there’s going to be a learning curve and there’s going to be bugs in the thing that you’ll have to fix. Your alternative is to write your own. If you write your own, you still have to learn new things and there’s still going to be bugs you have to fix.

                It’s really helpful when you encounter a problem to say, “Okay, let me describe as well as I can what the problem is,” and send in a really good problem report so that it can get fixed. Even better, “Here, I’ve found the bug and fixed it,” and submit a pull request. That benefits everybody in the whole community.

VRguy:  Perfect. All right, so Russ, this has been great. Thank you so much for coming onto the program. I look forward to continuing to work with you on building the OSVR framework.

Russ:     All right. Thanks so much.

 
