Virtual reality (VR) has been a hot topic in the L&D space over the past year, but rarely are you given a look inside a VR project. During a recent 45-minute session, I partnered with Ann Rollins to take a deep dive inside a rapid VR project that we tag-teamed from idea to implementation. With no prior experience and no process to follow, we leveraged our ID skills and incorporated a variety of techniques to create an engaging VR solution that worked.
During the session we shared our experience: how we planned the solution, the obstacles we encountered, our design process, the types of outside talent we had to tap for development, the testing process, and the final implementation. We also shared the types of feedback we received. If you don't have time to listen to the full recording today, here is a list of the four key insights we shared that might help you launch your next VR project:
- Build a strong team with complementary skills.
- Focus on the content, not the technology.
- Start your design with a bird’s-eye view and then narrow your focus.
- Join VR groups on sites like LinkedIn, Facebook, and Twitter for support.
After the presentation, several great questions came up from the audience and I wanted to share them with you. Below are those questions and my best answers. This is an ongoing conversation, and I encourage you to keep the questions coming in via the comments section at the bottom of this page.
Q: Why did the customer ask for VR? Or did you suggest it based on their goals for the learner?
A: The premise of our project was to create an engaging experience using an emerging technology. We knew that our audience had seen and heard of the potential for using VR in corporate learning. While we knew that, for most organizations, VR is still a bit out on the collective roadmap, we wanted to show what could be built, today, using realistically available development resources.
Q: What are the cost considerations for a rapid VR project?
A: Cost considerations will vary depending on the type of VR solution; here we used our project as a guide.
The team will consist of your "typical" eLearning solution team and will include a PM, an ID, a media specialist (a graphic artist, videographer, or 3D artist), a programmer, and an engineer. This team will be working, in large part, simultaneously on their tasks. Here is the general breakdown:
- 20% PM
- 50% ID
- 100% Programmer (to build the user experience)
- 80% Media specialist
- 40% Engineer (to build the back-end web application)
If you take these allocations, convert them to hours and cost per week, and then estimate the number of weeks of development for your VR project, you'll have a rough approximation of the project's scope.
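The extrapolation above can be sketched in a few lines. The allocation percentages come from the breakdown above; the hourly rates, the 40-hour week, and the six-week timeline are purely hypothetical placeholders you would replace with your own numbers:

```python
# Hypothetical cost-extrapolation sketch. Allocation percentages match the
# breakdown above; rates, hours per week, and the week count are illustrative.

HOURS_PER_WEEK = 40

team = {
    "PM":               {"allocation": 0.20, "rate": 90},
    "ID":               {"allocation": 0.50, "rate": 75},
    "Programmer":       {"allocation": 1.00, "rate": 85},
    "Media specialist": {"allocation": 0.80, "rate": 70},
    "Engineer":         {"allocation": 0.40, "rate": 95},
}

def weekly_cost(team):
    """Sum each role's allocated hours times its hourly rate for one week."""
    return sum(r["allocation"] * HOURS_PER_WEEK * r["rate"] for r in team.values())

weeks = 6  # assumed development timeline
print(f"Weekly burn: ${weekly_cost(team):,.2f}")
print(f"Rough scope for {weeks} weeks: ${weekly_cost(team) * weeks:,.2f}")
```

Swapping in your own rates and timeline gives you a defensible first-pass number before any hardware or hosting costs are added.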
Hardware and Hosting:
- Viewers – For our project we identified the audience and number of participants, then made our purchases based on the number of viewing devices (in this case, the number of iPhones that we rented for the activity).
- Development environment – Our framework was open source, but in the case of building a custom solution in a commercial framework such as Unreal or Unity, there would be a fee.
- Hosting – If your solution includes a back-end to manage multiple users, you will need a hosting solution to house the web application (site) and database.
Communication and Support:
Like many new technologies that include hosted components, there is a need to support the application. These activities can include:
- Communication plan for rollout – The solution will be a new experience so some user guidance will need to be written and distributed.
- User support – For a more complicated environment, you may want to consider a resource for answering questions.
- Application support – We often say that we “cannot SLA the Internet” and since things change (such as browsers, security protocols, etc.), you should plan a certain number of hours per month for the team to effect patches or enhancements.
- Hosting support – As with application support, hosting requires ongoing maintenance.
Q: How much time did both of you spend designing the solution? Did this take up your full-time job?
A: This particular VR project was part of an innovation project and was not tied to any specific customer. We worked on the project in our "spare time" as we all continued to support our respective projects. Ann and I spent approximately 25–30 hours designing the solution; this includes team meetings, brainstorming sessions, research, individual contributions, writing, and testing.
Q: Why did you have such a short timeline?
A: The timeline was a planned part of the project, designed to stretch both our creative and innovation muscles. We wanted to see how much we could do in the time we had. It's not uncommon for us to get requests for quick-hit, highly technical projects at GP Strategies. Knowing what we can do on a less-than-ideal schedule also allows us to more readily scope similar projects that come in the door.
Q: If you had to do it again, what would you do differently?
A: For this particular project, there isn't much we would change. We were able to work with ambiguous directions and create a working solution in the timeframe allotted. We all managed to stay on the same page and shared a flexibility that allowed the project to mold, bend, and reshape itself. However, had we been given more time to develop the solution, I strongly believe we could have created a solution that included more of the "kitchen sink" ideas from our bird's-eye view of the project.
Q: In what types of learning environments do you think VR is a good fit?
A: VR solutions have a level of complexity and cost associated with them. We believe the learning environments best suited for VR are those in manufacturing, aerospace, and the sciences (to name a few). In other words, VR fits well where a safe practice environment matters: learners can develop new skills that require following specific procedures and steps without being put at personal risk if they fail to complete a task properly. Pilots are a great example of a population that would benefit from the use of VR in their training space. Imagine you need to train a group of pilots on new flight instrumentation. You could create a virtual environment in which pilots learn to use the instrumentation before taking flight, which would be critical to their success and safety.
Q: Please share examples of how VR is best implemented into learning.
A: We learn by making mistakes, but in some on-the-job learning situations, mistakes come at a high human, time, and resource cost. The ideal targets for VR in learning involve the need for an immersive, 360-degree component. Imagine high-stakes learning situations: Consider when knowledge experts do not exist yet (think repairs on exclusive manufacturing technology), or when an apprenticeship simply isn't practical or safe (think combat environment awareness for general contractors creating and enhancing products for our military, or underwater welding on an oil rig).
In more attainable learning situations, think about scenarios where learners need to become familiar with equipment, though perhaps not from an operating perspective, as those learners need more than VR can provide. In these cases, providing actual access is costly; for instance, access to an aircraft or other commercial transportation equipment. Many learners would greatly benefit from an immersive walk-around or crawl-through rather than consuming images from a PPT deck, especially if they are not co-located where these products are manufactured or serviced.
Q: Can VR be standalone or is it better to integrate into a blended approach?
A: From an instructional design perspective, the VR solution should be part of a blended approach, as you want to have a way of assessing acquired knowledge. The solution we shared did have a debriefing where participants reflected on the possibilities of the solution. If you were creating a VR game not tied to any learning objectives, then presenting it as a standalone solution would be an option.
Q: How do you record real-life experiences such as sky diving?
A: As the technology becomes more readily available, a number of hardware solutions are also being introduced into the market to facilitate the development of the VR environment. To record a real-life experience, such as sky diving, you could use a wearable 360-degree VR camera.
Q: In the example shown during the webinar, could the players chat to each other online?
A: In the example we shared, the players wearing the Google Cardboard goggles were not in a shared space, so they were not able to interact with each other. We considered this in our design; however, the programming would have taken longer than the time we had to complete the project.
Q: What kind of 3D hardware are you using to view those VR environments?
A: For the solution we shared, we used the following hardware to display the environment:
- Windows laptop
- Wireless Internet
- Google Cardboard
I facilitated a demo of the solution at DevLearn 2016 and used:
- Windows laptop
- Wireless Internet
- Generic Knoxlabs Cardboard
- iPhones and Android phones
Q: What kind of VR development software is used to develop those VR worlds?
A: There are three main steps for developing the VR world that we shared; we've listed them below and then provided a deeper dive into each.
- Create your images.
- Create your “cubic projection.”
- Program the device orientation to the six cubic projections and add clickable areas with associated actions.
Step 1: Create your images.
The living room was created using 3D Studio Max, a 3D modeling program. Autodesk 3D Studio Max and Autodesk Maya are the industry standards for 3D modeling and animation. Another option is Blender, a free professional-grade 3D modeling and animation program. It is also not necessary to render a 3D scene at all; you could use panoramic photographs instead. The benefit of rendering in 3D software is that you don't have to deal with stitching artifacts. That being said, with the proper equipment and expertise, it is possible to get excellent results using panoramic photography.
Step 2: Create your cubic projection.
Once you have a panoramic image, either through photography or 3D rendering, it needs to be converted to a "cubic projection." The standard output for panoramic images is generally an equirectangular projection. Imagine a flat, rectangular map of the world: the North and South Poles are stretched out, while little distortion exists near the equator. That is an equirectangular projection. The width of the image, representing 360 degrees, is twice its height, which represents 180 degrees. A cubic projection breaks the image up into six separate square images representing the sides of a cube; it takes less processing power for a computer to skin a cube than to skin a sphere with an equirectangular projection.
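The conversion comes down to mapping each pixel of a cube face back to a point on the equirectangular image. The sketch below is a minimal illustration of that mapping; the face names and their orientation convention are one common choice (conventions vary between tools), and `face_pixel_to_equirect` is a hypothetical helper name, not part of any particular stitching package:

```python
import math

def face_pixel_to_equirect(face, x, y, size):
    """
    Map a pixel (x, y) on one face of a cube (edge length `size`) to
    normalized (u, v) coordinates in an equirectangular source image.
    Face orientation convention here is illustrative; tools differ.
    """
    # Normalize pixel coordinates to [-1, 1] across the face.
    a = 2.0 * (x + 0.5) / size - 1.0
    b = 2.0 * (y + 0.5) / size - 1.0

    # Direction vector from the cube's center through this pixel.
    if face == "front":   vx, vy, vz = a, -b, 1.0
    elif face == "back":  vx, vy, vz = -a, -b, -1.0
    elif face == "right": vx, vy, vz = 1.0, -b, -a
    elif face == "left":  vx, vy, vz = -1.0, -b, a
    elif face == "up":    vx, vy, vz = a, 1.0, b
    elif face == "down":  vx, vy, vz = a, -1.0, -b
    else:
        raise ValueError(f"unknown face: {face}")

    # Convert the direction to longitude/latitude on the sphere.
    lon = math.atan2(vx, vz)                    # -pi .. pi  (360 degrees)
    lat = math.atan2(vy, math.hypot(vx, vz))    # -pi/2 .. pi/2 (180 degrees)

    # Normalize to (u, v) in [0, 1]; the image is twice as wide as it is tall.
    u = (lon / math.pi + 1.0) / 2.0
    v = (1.0 - 2.0 * lat / math.pi) / 2.0
    return u, v

# The center pixel of the front face lands at the center of the
# equirectangular image (the "equator" point directly ahead).
print(face_pixel_to_equirect("front", 0, 0, 1))
```

A converter like Pano2qtvrgui effectively runs this mapping for every pixel of all six faces, sampling the equirectangular image at each resulting (u, v).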
The tool we used to convert from equirectangular projection to cubic projection was Pano2qtvrgui, a free tool. Almost any panoramic stitching software should have this ability as well. If you are looking to create a standardized workflow, it might make sense to invest in commercial stitching software.
Step 3: Program the device orientation to the six cubic projections and add clickable areas with associated actions.
The final application is basically a cube, skinned on the inside with the six cubic projection images, that rotates based on the device orientation. There is a little bit of tricky programming to determine what a user is looking at so that items can be made clickable. Because all objects are baked into the graphics of the skinned cube, clickable items were actually invisible squares placed into the 3D scene. When a clickable area was targeted and selected using the button on top of the Google Cardboard, an action was executed; in our case, displaying a popup window.
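The "what is the user looking at" check boils down to comparing the device's forward vector against the direction of each clickable hotspot. This is a minimal sketch of that idea, not our actual implementation: the hotspot position, the 5-degree tolerance, and the function names are all hypothetical placeholders:

```python
import math

def gaze_direction(yaw, pitch):
    """Forward unit vector for a device facing at the given yaw/pitch (radians)."""
    return (
        math.cos(pitch) * math.sin(yaw),
        math.sin(pitch),
        math.cos(pitch) * math.cos(yaw),
    )

def is_targeted(gaze, hotspot_dir, max_angle=math.radians(5)):
    """True if the gaze falls within max_angle of the hotspot's direction."""
    dot = sum(g * h for g, h in zip(gaze, hotspot_dir))
    # Clamp before acos to guard against floating-point drift.
    return math.acos(max(-1.0, min(1.0, dot))) <= max_angle

# Hypothetical hotspot straight ahead on the front face of the cube.
hotspot = (0.0, 0.0, 1.0)

print(is_targeted(gaze_direction(0.0, 0.0), hotspot))               # looking forward: True
print(is_targeted(gaze_direction(math.radians(45), 0.0), hotspot))  # 45 degrees off: False
```

When the check returns true, the application fires the hotspot's action, which in our case was displaying a popup window.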
By Myra Roldan | April 12, 2017