This tutorial covers how to implement a basic augmented reality application with the Vuforia SDK in Unity. Throughout the tutorial, you will learn how to set up a scene compatible with Vuforia and how to implement various scripts, including a ray casting script, so that you can interact with the objects in the scene.
Creating a Developer Account
Before starting off, you need to register for a Vuforia developer account. Go to the Vuforia Developer Portal to create an account.
Once you create your account, it is time to download the Unity package of Vuforia. Go to the download page and download the specific package for Unity.
Creating an Image Target
An image target is required so that a device's camera has a reference to recognize and track. The orientation and actual size of the target image directly determine the orientation and size of the superimposed graphics.
Any image can be assigned as a target. However, the features of the target image effectively determine how well the target is tracked. In this tutorial, we are going to use an online tool to generate feature-rich target images. Generate an image target by using the Augmented Reality Marker Generator online tool, and save the image on your computer.
Preparing the Unity Scene
Vuforia Package
Create a new 3D Unity project and then double-click the Unity package you downloaded for Vuforia. This will open the import window shown below. Click All to select all the content of the package, and then hit Import. This imports all the tools necessary for the AR application.
Image Target
The next step is to import the image target files. In order to obtain the image target files, we need to use the Vuforia Developer Portal.
Go to the Vuforia Developer Portal and then log in to your account. Under the Develop tab, you will see the Target Manager. First, you need to add a database: use the Add Database button to create one.
Name your database as you wish, and select Device as the type.
Now we are ready to add a target to this database. Click the Add Target button, and the following window will appear. In our case, the type should be Single Image. Select the image target we generated with the online tool. If you have trouble uploading the file, try converting it to the .jpg format and uploading it again.
Width is a crucial parameter. It should match the real size of the target image that you will eventually print on paper. I set the width to 40. The value has no unit, since it is interpreted in the units of your scene. For example, if you treat one scene unit as one centimeter, a width of 40 corresponds to a printed target 40 cm wide.
Once you add the target to your database, Vuforia rates it. The target image generator we used produces feature-rich images, so our target gets 5 stars, which means it is easy for Vuforia to recognize and track.
Now you need to download this database. To do so, hit the Download Database button and select Unity Editor as the development platform.
Once you've downloaded the database, double-click it and import all of its content into the Unity scene we are working on.
ARCamera Object
We start by adding Vuforia's ARCamera object to our scene. Navigate to the Assets > Vuforia > Prefabs folder and drag the ARCamera prefab into the scene.
Select the ARCamera object, and under the Inspector tab you will see the App License Key section. This license key is obtained from the Vuforia Developer Portal.
Log in to your Vuforia account on the Developer Portal and under the Develop tab, you will find the License Manager section. Click the Add License Key button. On the following page, select Development as the project type and define an application name for your project. This name is not crucial, and you can alter it later on if you wish.
Hit Next, and then confirm your license key on the next page.
Select the license you've just created to reveal the license key. Copy it and paste it into the App License Key section under the ARCamera settings.
Under Database Load Behaviour, check the Load ARdemo Database option. Once you check it, another option called Activate will appear right under it. Check this option as well.
The ARdemo part of the Load ARdemo Database option depends on how you named your database.
Image Target Object
The second object we need in our scene is the Image Target object.
Under the Assets > Vuforia > Prefabs directory, you will also find the "ImageTarget" object. Add this object to your scene and select it to reveal the options.
Under the Image Target Behaviour section, you will see the Database option. Select your database from the dropdown menu, and then choose the specific image target you want to assign to the object from the Image Target dropdown. If you have multiple image targets in one database, they will all be listed here.
The width and height parameters will be automatically set depending on the value you assigned when creating the image targets in Vuforia's developer portal.
Augmenting Graphics
The next step is to create the graphics and tie them to the image target. You can either create a GameObject or import your own 3D model into Unity. In this tutorial, we are going to use a 3D cube object for the sake of simplicity.
Add a cube object to the scene as shown in the following figure.
Set the x, y, and z values of its Scale option under Transform to 40, so that it matches the size of the image target we generated.
If you set a different width value for your image target when generating it in the developer portal, use that value instead so the cube matches the full size of the image target.
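If you prefer setting the scale from code, here is a minimal sketch; the component name MatchTargetSize and the targetWidth field are illustrative, not part of Vuforia.

using UnityEngine;

// Illustrative helper (not part of Vuforia): scale a default Unity cube
// so it covers an image target of the given width.
public class MatchTargetSize : MonoBehaviour
{
    // Set this to the width you entered in the Target Manager (40 here).
    public float targetWidth = 40f;

    void Start()
    {
        // A default Unity cube is 1 unit on each side, so a uniform scale
        // of targetWidth makes it span the full target.
        transform.localScale = Vector3.one * targetWidth;
    }
}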
The last step to get our AR app working is to make the cube object a child of the image target. To do so, simply drag the cube object onto the ImageTarget object in the Hierarchy panel.
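The same parenting can be done in code. The sketch below is illustrative and assumes the objects keep the names "Cube" and "ImageTarget" in the Hierarchy.

using UnityEngine;

// Illustrative sketch: parent the cube under the image target at runtime.
public class ParentCubeToTarget : MonoBehaviour
{
    void Start()
    {
        GameObject cube = GameObject.Find("Cube");
        GameObject target = GameObject.Find("ImageTarget");

        // SetParent keeps the cube's world position by default,
        // mirroring the drag-and-drop behaviour in the editor.
        cube.transform.SetParent(target.transform);
    }
}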
The final state of the scene should be as follows:
Now hit the Play button to run your application. It will use your webcam. Either print the target image or open it on your phone so that Vuforia can detect it through the webcam. I did the latter and opened the target image on my phone.
Here is an actual screenshot of the webcam view. You can see that the cube object covers the whole target image, since we matched the scaling factors of the 3D object and the target image.
Interaction Scripts
So far, we've developed a basic AR application that recognizes and tracks our target image and displays the designated 3D graphics. However, for a complete AR application, we also need to be able to interact with the objects, augmenting the reality.
For this purpose, we need to be able to detect where we clicked (or touched, in the case of a mobile device). We'll do this by casting a ray from the camera into the scene; the script that does this is named rayTracer in this tutorial, although, strictly speaking, it performs ray casting rather than ray tracing.
First, create a folder named "scripts" under Assets to keep everything organized. We are going to store our script files in this folder. Then create a C# script file in this folder and name it "rayTracer". Naming matters here because the class name in the following code must match the file name. If you prefer a different name for your script file, change the class name in the provided code accordingly.
Ray-Tracer Script
Copy and paste the following code into the C# Script file you have just created and named "rayTracer".
using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class rayTracer : MonoBehaviour {

    // Objects hit during the current frame and the previous frame.
    private List<GameObject> touchList = new List<GameObject>();
    private GameObject[] touchPrev;
    private RaycastHit hit;

    void Update () {

#if UNITY_EDITOR
        // Mouse input, used while running in the Unity editor.
        if (Input.GetMouseButton(0) || Input.GetMouseButtonDown(0) || Input.GetMouseButtonUp(0)) {

            touchPrev = new GameObject[touchList.Count];
            touchList.CopyTo(touchPrev);
            touchList.Clear();

            // Cast a ray from the camera through the mouse position.
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            //Debug.DrawRay(ray.origin, ray.direction * 10000, Color.green, 10, false);

            if (Physics.Raycast(ray, out hit)) {
                GameObject recipient = hit.transform.gameObject;
                touchList.Add(recipient);

                if (Input.GetMouseButtonDown(0)) {
                    recipient.SendMessage("touchBegan", hit.point, SendMessageOptions.DontRequireReceiver);
                }
                if (Input.GetMouseButtonUp(0)) {
                    recipient.SendMessage("touchEnded", hit.point, SendMessageOptions.DontRequireReceiver);
                }
                if (Input.GetMouseButton(0)) {
                    recipient.SendMessage("touchStay", hit.point, SendMessageOptions.DontRequireReceiver);
                }
            }

            // Notify objects that were hit last frame but not this frame.
            foreach (GameObject g in touchPrev) {
                if (!touchList.Contains(g)) {
                    g.SendMessage("touchExit", hit.point, SendMessageOptions.DontRequireReceiver);
                }
            }
        }
#endif

        // Touch input, used on a mobile device.
        if (Input.touchCount > 0) {

            touchPrev = new GameObject[touchList.Count];
            touchList.CopyTo(touchPrev);
            touchList.Clear();

            foreach (Touch touch in Input.touches) {
                Ray ray = Camera.main.ScreenPointToRay(touch.position);

                if (Physics.Raycast(ray, out hit)) {
                    GameObject recipient = hit.transform.gameObject;
                    touchList.Add(recipient);

                    if (touch.phase == TouchPhase.Began) {
                        recipient.SendMessage("touchBegan", hit.point, SendMessageOptions.DontRequireReceiver);
                    }
                    if (touch.phase == TouchPhase.Ended) {
                        recipient.SendMessage("touchEnded", hit.point, SendMessageOptions.DontRequireReceiver);
                    }
                    if (touch.phase == TouchPhase.Stationary || touch.phase == TouchPhase.Moved) {
                        recipient.SendMessage("touchStay", hit.point, SendMessageOptions.DontRequireReceiver);
                    }
                    if (touch.phase == TouchPhase.Canceled) {
                        recipient.SendMessage("touchExit", hit.point, SendMessageOptions.DontRequireReceiver);
                    }
                }
            }

            // Notify objects that were hit last frame but not this frame.
            foreach (GameObject g in touchPrev) {
                if (!touchList.Contains(g)) {
                    g.SendMessage("touchExit", hit.point, SendMessageOptions.DontRequireReceiver);
                }
            }
        }
    }
}
This script detects mouse clicks when you are working in the Unity editor, and touch input when the application is deployed on a mobile device with a touch screen.
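Note that the script communicates with the hit objects through SendMessage, so any component on a hit object can react simply by defining methods with the expected names. A minimal illustrative receiver (the TouchLogger name and the logging are just an example, not part of the tutorial's code):

using UnityEngine;

// Illustrative receiver: any component on the hit GameObject can define
// these handlers. Unity's SendMessage also allows zero-parameter versions
// that ignore the Vector3 hit point, as the interaction script below does.
public class TouchLogger : MonoBehaviour
{
    void touchBegan(Vector3 hitPoint)
    {
        Debug.Log("Touch began at " + hitPoint);
    }

    void touchExit(Vector3 hitPoint)
    {
        Debug.Log("Touch exited at " + hitPoint);
    }
}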
Once you've created your rayTracer script, you need to activate it by assigning it to one of the objects in the scene. I selected the ARCamera object and added the rayTracer script as a component using the Add Component button under the Inspector tab.
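Attaching the script from code is equivalent; the sketch below is illustrative and assumes the camera object keeps its default name, "ARCamera".

using UnityEngine;

// Illustrative alternative to the Add Component button.
public class AttachRayTracer : MonoBehaviour
{
    void Start()
    {
        GameObject.Find("ARCamera").AddComponent<rayTracer>();
    }
}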
Object Material
Now we are going to assign a material to our Cube object and change the color of the material upon interaction with the cube.
Under Assets, create a material and name it as you wish.
Now assign this material by dragging and dropping over the cube object.
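Assigning the material from code works too. The sketch below is illustrative and assumes you saved the material under a Resources folder as "CubeMaterial" (i.e. Assets/Resources/CubeMaterial.mat), so Resources.Load can find it.

using UnityEngine;

// Illustrative sketch: assign the material from code instead of dragging it.
// Attach this to the cube object.
public class AssignMaterial : MonoBehaviour
{
    void Start()
    {
        Material m = Resources.Load<Material>("CubeMaterial");
        GetComponent<Renderer>().material = m;
    }
}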
Interaction Script
Create a new C# Script under the scripts folder and name it "interaction".
Copy the following C# code into your "interaction" script file, and then add the script to the cube object as a component, just as we did with the "rayTracer" script file. This time, however, the script must be a component of the cube object itself; this is what makes it possible to interact with the cube alone.
using UnityEngine;
using System.Collections;

public class interaction : MonoBehaviour {

    public static Color defaultColor;
    public static Color selectedColor;
    public static Material mat;

    void Start () {
        // Grab the cube's material and switch the Standard shader to its
        // transparent (fade) rendering mode.
        mat = GetComponent<Renderer>().material;
        mat.SetFloat("_Mode", 2);
        mat.SetInt("_SrcBlend", (int)UnityEngine.Rendering.BlendMode.SrcAlpha);
        mat.SetInt("_DstBlend", (int)UnityEngine.Rendering.BlendMode.OneMinusSrcAlpha);
        mat.SetInt("_ZWrite", 0);
        mat.DisableKeyword("_ALPHATEST_ON");
        mat.EnableKeyword("_ALPHABLEND_ON");
        mat.DisableKeyword("_ALPHAPREMULTIPLY_ON");
        mat.renderQueue = 3000;

        defaultColor = new Color32(255, 255, 255, 255);  // white
        selectedColor = new Color32(255, 0, 0, 255);     // red

        mat.color = defaultColor;
    }

    // Handlers invoked by the rayTracer script via SendMessage.
    void touchBegan () {
        mat.color = selectedColor;
    }

    void touchEnded () {
        mat.color = defaultColor;
    }

    void touchStay () {
        mat.color = selectedColor;
    }

    void touchExit () {
        mat.color = defaultColor;
    }
}
In this "interaction" script, we are referring to the material of the cube object as "mat".
We created two different material objects named defaultColor
and selectedColor
. defaultColor
is selected to be white, as the RGBA parameters indicate, which are (255, 255, 255, 255)
.
We initialize the cube object's material color as defaultColor
by the following line:
mat.color = defaultColor;
We have four different functions for four different states:

touchBegan() is called at the instant you touch the object.

touchEnded() is called when you release your finger.

touchStay() is called right after touchBegan() and keeps firing while the touch continues. So, if you assign different colors to your material in these functions, you are unlikely to see the color assigned in touchBegan(), since it applies only for the very first instant the touch is recognized.

touchExit() is called when you drag your finger off the cube object's surface instead of releasing it, which would call the touchEnded() function as explained above.
In our code, when we touch the cube object, we assign selectedColor to mat.color, which is the color of our cube object's material.

By assigning selectedColor within the touchStay() function, we make sure that the color of the cube object stays equal to selectedColor as long as we keep our finger on it. If we release our finger or drag it off the cube object, we assign defaultColor to the material's color parameter via the touchEnded() or touchExit() function, in accordance with the action we took.
Now run the project, and click on the cube object once the target image is recognized and the cube has appeared. The cube should turn red, and then white again when you release the click or move the pointer off the cube's surface.
You can experiment with different colors for the four different actions to understand them thoroughly.
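For instance, a variant of the four handlers with a distinct (arbitrarily chosen) color per state makes each transition easy to see. These are drop-in replacements for the handlers inside the interaction class shown earlier:

// Illustrative variant: give each touch state its own color.
void touchBegan() { mat.color = Color.yellow; } // first frame of the touch
void touchStay()  { mat.color = Color.red;    } // while the touch continues
void touchEnded() { mat.color = Color.green;  } // finger released on the cube
void touchExit()  { mat.color = Color.blue;   } // finger dragged off the cube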
Conclusion
In this tutorial, we've gone through an introduction to the Vuforia SDK for Unity along with its developer portal, and we've seen how to generate a target image and an appropriate license key.
On top of that, we generated custom script files in order to be able to interact with the augmented graphics. This tutorial is just an introduction to enable you to start using Vuforia with Unity and creating your own AR applications.